Ultrahigh Error Threshold for Surface Codes with Biased Noise
We show that a simple modification of the surface code can exhibit an
enormous gain in the error correction threshold for a noise model in which
Pauli Z errors occur more frequently than X or Y errors. Such biased noise,
where dephasing dominates, is ubiquitous in many quantum architectures. In the
limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor
network decoder proposed by Bravyi, Suchara and Vargo. The threshold remains
surprisingly large in the regime of realistic noise bias ratios, for example
28.2(2)% at a bias of 10. The performance is in fact at or near the hashing
bound for all values of the bias. The modified surface code still uses only
weight-4 stabilizers on a square lattice, but merely requires measuring
products of Y instead of Z around the faces, as this doubles the number of
useful syndrome bits associated with the dominant Z errors. Our results
demonstrate that large efficiency gains can be found by appropriately tailoring
codes and decoders to realistic noise models, even under the locality
constraints of topological codes. Comment: 6 pages, 5 figures, comments
welcome; v2 includes minor improvements to the numerical results, additional
references, and an extended discussion; v3 published version (incorporating
supplementary material into main body of paper)
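The abstract above reports thresholds at or near the hashing bound for all noise biases. As a rough numerical companion, the zero-rate hashing bound for biased Pauli noise can be computed by solving H = 1 bit, where H is the Shannon entropy of the error distribution. The sketch below is an illustration only (not the tensor-network decoder from the paper), with bias defined as eta = p_z / (p_x + p_y) and p_x = p_y assumed.

```python
import math

def hashing_threshold(eta):
    """Zero-rate hashing-bound error rate for Pauli noise with bias
    eta = p_z / (p_x + p_y), assuming p_x = p_y.

    With this convention eta = 0.5 recovers depolarizing noise.
    Solves H(1-p, p_x, p_y, p_z) = 1 bit by bisection; a numerical
    sketch for illustration, not the paper's method."""
    def H(p):
        pz = p * eta / (eta + 1.0)
        px = p / (2.0 * (eta + 1.0))
        return -sum(q * math.log2(q) for q in (1.0 - p, px, px, pz) if q > 0)

    # Entropy is concave along this linear family of distributions, so H(p)
    # rises then falls; restrict to [0, argmax H] to make the crossing unique.
    p_star = max((i / 1000.0 for i in range(1, 1000)), key=H)
    lo, hi = 0.0, p_star
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if H(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(hashing_threshold(0.5))  # depolarizing noise: ~0.1893
```

Under these assumptions the bound approaches 50% as eta grows, consistent with the infinite-bias figure quoted above being below (and the later exact result at) the hashing bound.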
Tailoring surface codes for highly biased noise
The surface code, with a simple modification, exhibits ultra-high error
correction thresholds when the noise is biased towards dephasing. Here, we
identify features of the surface code responsible for these ultra-high
thresholds. We provide strong evidence that the threshold error rate of the
surface code tracks the hashing bound exactly for all biases, and show how to
exploit these features to achieve significant improvement in logical failure
rate. First, we consider the infinite bias limit, meaning pure dephasing. We
prove that the error threshold of the modified surface code for pure dephasing
noise is 50%, i.e., the error rate at which all qubits are fully dephased, and this threshold
can be achieved by a polynomial time decoding algorithm. We demonstrate that
the sub-threshold behavior of the code depends critically on the precise shape
and boundary conditions of the code. That is, for rectangular surface codes
with standard rough/smooth open boundaries, it is controlled by the parameter
gcd(j, k), where j and k are the dimensions of the surface code lattice. We
demonstrate a significant improvement in logical failure rate with pure
dephasing for co-prime codes that have gcd(j, k) = 1, and closely-related rotated
codes, which have a modified boundary. The effect is dramatic: the same logical
failure rate achievable with a square surface code and n physical qubits can
be obtained with a co-prime or rotated surface code using only O(√n)
physical qubits. Finally, we use approximate maximum likelihood decoding to
demonstrate that this improvement persists for a general Pauli noise biased
towards dephasing. In particular, comparing with a square surface code, we
observe a significant improvement in logical failure rate against biased noise
using a rotated surface code with approximately half the number of physical
qubits. Comment: 18+4 pages, 24 figures; v2 includes additional coauthor (ASD) and new
results on the performance of surface codes in the finite-bias regime,
obtained with beveled surface codes and an improved tensor network decoder;
v3 published version
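As a back-of-the-envelope check on the qubit savings mentioned above: by the standard textbook counts, an unrotated surface code of distance d uses d^2 + (d-1)^2 physical qubits, while the rotated layout uses d^2, roughly half for the same distance. The snippet below uses these standard formulas (it is not code from the paper) and also classifies j x k lattice shapes by the gcd parameter discussed in the abstract.

```python
from math import gcd

def qubits_standard(d):
    """Unrotated surface code of distance d: d^2 qubits on one edge
    orientation of the lattice plus (d-1)^2 on the other."""
    return d * d + (d - 1) * (d - 1)

def qubits_rotated(d):
    """Rotated surface code of distance d."""
    return d * d

def is_coprime_shape(j, k):
    """Co-prime j x k surface codes are those with gcd(j, k) = 1."""
    return gcd(j, k) == 1

# The rotated code needs roughly half the qubits at each distance.
for d in (3, 5, 7, 9):
    print(d, qubits_standard(d), qubits_rotated(d))
```

For example, at distance 5 the counts are 41 versus 25 qubits, matching the "approximately half" comparison in the abstract.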
The role of conviction and narrative in decision-making under radical uncertainty
We propose conviction narrative theory (CNT) to broaden decision-making theory so that it can better understand and analyse how subjectively means-end rational actors cope in contexts in which the traditional assumptions of decision-making models fail to hold. Conviction narratives enable actors to draw on their beliefs, causal models and rules of thumb to identify opportunities worth acting on, to simulate the future outcomes of their actions and to feel sufficiently convinced to act. The framework focuses on how narrative and emotion combine to allow actors to deliberate and to select actions that they think will produce the outcomes they desire. It specifies connections between particular emotions and deliberative thought, hypothesizing that approach and avoidance emotions evoked during narrative simulation play a crucial role. Two mental states, Divided and Integrated, in which narratives can be formed or updated, are introduced and used to explain some familiar problems that traditional models cannot explain.
Keys to success of a community of clinical practice in primary care : a qualitative evaluation of the ECOPIH project
The current reality of primary care (PC) makes it essential to have telemedicine systems available to facilitate communication between care levels. Communities of practice have great potential in terms of care and education, and that is why the Online Communication Tool between Primary and Hospital Care was created. This tool enables PC and non-GP specialist care (SC) professionals to raise clinical cases for consultation and to share information. The objective of this article is to explore healthcare professionals' views on communities of clinical practice (CoCPs) and the changes that need to be made in an uncontrolled real-life setting after more than two years of use. A descriptive-interpretative qualitative study was conducted on a total of 29 healthcare professionals who were users and non-users of a CoCP, using 2 focus groups, 3 triangular groups and 5 individual interviews. There were 18 women, 21 physicians and 8 nurses. Of the interviewees, 21 were PC professionals, 24 were users of a CoCP and 7 held managerial positions. For a system of communication between PC and SC to become a tool that is habitually used and very useful, the interviewees considered that it would have to be able to find quick, effective solutions to the queries raised, based on up-to-date information that is directly applicable to daily clinical practice. Contact should be virtual - and probably collaborative - via a platform integrated into their habitual workstations and led by PC professionals. Organisational changes should be implemented to enable users to have more time in their working day to spend on the tool, and professionals should have a proactive attitude in order to make the most of its potential. It is also important to make certain technological changes, basically aimed at improving the tool's accessibility, by integrating it into habitual clinical workstations.
The collaborative tool that provides reliable, up-to-date information that is highly transferrable to clinical practice is valued for its effectiveness, efficiency and educational capacity. In order to make the most of its potential in terms of care and education, organisational and technological changes are required to foster greater use. The online version of this article (10.1186/s12875-018-0739-0) contains supplementary material, which is available to authorized users.
Information and digital literacies; a review of concepts
A detailed literature review, analysing the multiple and confusing concepts around the ideas of information literacy and digital literacy at the start of the millennium. The article was well-received, and is my most highly-cited work, with over 1100 citations.
Local tensor-network codes
Tensor-network codes enable the construction of large stabilizer codes out of tensors describing smaller stabilizer codes. One application of tensor-network codes was an efficient and exact decoder for holographic codes. Here, we show how to write some topological codes, including the surface code and colour code, as simple tensor-network codes. We also show how to calculate distances of stabilizer codes by contracting a tensor network. The algorithm actually gives more information, including a histogram of all logical coset weights. We prove that this method is efficient in the case of stabilizer codes encoded via local log-depth circuits in one dimension and holographic codes. Using our tensor-network distance calculator, we find a modification of the rotated surface code that has the same distance but fewer minimum-weight logical operators by ‘doping’ the tensor network, i.e., we break the homogeneity of the tensor network by locally replacing tensors. For this example, this corresponds to an improvement in successful error correction of almost 2% against depolarizing noise (in the perfect-measurement setting), but comes at the cost of introducing three higher-weight stabilizers. Our general construction lets us pick a network geometry (e.g., a Euclidean lattice in the case of the surface code), and, using only a small set of seed codes (constituent tensors), build extensive codes with the potential for optimisation.
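The quantities this abstract describes (code distance and the histogram of logical coset weights) can be illustrated by brute force on a small code. The sketch below enumerates the full stabilizer group of the [[5,1,3]] code directly; this is exponential-time and purely illustrative, whereas the paper's point is that a tensor-network contraction can compute the same quantities efficiently for suitable codes.

```python
from itertools import combinations
from collections import Counter

def to_bits(pauli):
    """Pauli string -> (x, z) binary tuples (phases ignored)."""
    x = tuple(1 if c in "XY" else 0 for c in pauli)
    z = tuple(1 if c in "ZY" else 0 for c in pauli)
    return x, z

def mul(a, b):
    """Multiply two Paulis in symplectic form (componentwise XOR)."""
    (ax, az), (bx, bz) = a, b
    return (tuple(i ^ j for i, j in zip(ax, bx)),
            tuple(i ^ j for i, j in zip(az, bz)))

def weight(p):
    """Number of qubits on which the Pauli acts non-trivially."""
    x, z = p
    return sum(1 for i, j in zip(x, z) if i or j)

# [[5,1,3]] code: stabilizer generators and bare logical representatives.
stabilizers = [to_bits(s) for s in ("XZZXI", "IXZZX", "XIXZZ", "ZXIXZ")]
logicals = {"X": to_bits("XXXXX"), "Z": to_bits("ZZZZZ")}
logicals["Y"] = mul(logicals["X"], logicals["Z"])

def coset_weights(rep):
    """Weights of rep * S for every element S of the stabilizer group."""
    weights = []
    for r in range(len(stabilizers) + 1):
        for combo in combinations(stabilizers, r):
            p = rep
            for s in combo:
                p = mul(p, s)
            weights.append(weight(p))
    return weights

# Distance: minimum weight over all non-trivial logical cosets.
distance = min(min(coset_weights(rep)) for rep in logicals.values())
histogram = Counter(coset_weights(logicals["X"]))
print(distance)  # 3
```

Each coset here has 2^4 = 16 elements (one per stabilizer group element); the histogram of their weights is exactly the per-class information the tensor-network contraction yields, and the minimum over all logical classes gives the distance.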