Overhead and noise threshold of fault-tolerant quantum error correction
Fault tolerant quantum error correction (QEC) networks are studied by a
combination of numerical and approximate analytical treatments. The probability
of failure of the recovery operation is calculated for a variety of CSS codes,
including large block codes and concatenated codes. Recent insights into the
syndrome extraction process, which render the whole process more efficient and
more noise-tolerant, are incorporated. The average number of recoveries which
can be completed without failure is thus estimated as a function of various
parameters. The main parameters are the gate (gamma) and memory (epsilon)
failure rates, the physical scale-up of the computer size, and the time t_m
required for measurements and classical processing. The achievable computation
size is given as a surface in parameter space. This indicates the noise
threshold as well as other information. It is found that concatenated codes
based on the [[23,1,7]] Golay code give higher thresholds than those based on
the [[7,1,3]] Hamming code under most conditions. The threshold gate noise
gamma_0 is a function of epsilon/gamma and t_m; example values are
{epsilon/gamma, t_m, gamma_0} = {1, 1, 0.001}, {0.01, 1, 0.003}, {1, 100,
0.0001}, {0.01, 100, 0.002}, assuming zero cost for information transport. This
represents an order of magnitude increase in tolerated memory noise, compared
with previous calculations, which is made possible by recent insights into the
fault-tolerant QEC process.
Comment: 21 pages, 12 figures; minor mistakes corrected and layout improved, ref added; v4: clarification of assumption re logic gate
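The threshold behaviour described in the abstract above can be illustrated with the textbook scaling law for concatenated codes (this is a generic sketch, not the paper's own calculation): below the threshold noise rate, each level of concatenation squares the suppression of the logical failure rate, p_L ≈ p_th (p/p_th)^(2^l). The values p, p_th, and the number of levels here are illustrative placeholders.

```python
def logical_failure_rate(p: float, p_th: float, levels: int) -> float:
    """Standard concatenated-code estimate of the logical failure rate.

    Uses the generic double-exponential suppression formula
        p_L = p_th * (p / p_th) ** (2 ** levels),
    valid (as a rough model) when the physical error rate p is below
    the threshold p_th. Illustrative only; the paper's analysis of the
    Golay and Hamming codes is far more detailed.
    """
    return p_th * (p / p_th) ** (2 ** levels)

# Example: gate noise p = 1e-4 against an assumed threshold p_th = 1e-3
# (the order of magnitude quoted in the abstract). Each extra level of
# concatenation squares the ratio p/p_th, so the logical rate drops fast.
rates = [logical_failure_rate(1e-4, 1e-3, l) for l in range(4)]
```

Each additional level squares the ratio p/p_th = 0.1, which is why operating even modestly below threshold yields rapidly improving logical error rates.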
The financial clouds review
This paper demonstrates financial enterprise portability: moving entire application services from desktops to clouds and between different clouds, transparently to users, who can work as if on their familiar systems. To demonstrate portability, reviews of several financial models are presented, with Monte Carlo Methods (MCM) and the Black-Scholes Model (BSM) chosen as case studies. A special technique within MCM, the Least Squares Method, is used to reduce errors while performing accurate calculations. The MCM algorithm, written in MATLAB, is explained, and MCM simulations are performed on different types of clouds. Benchmark and experimental results are presented for discussion. 3D Black-Scholes analysis is used to explain the impacts and added value for risk analysis, and three different scenarios with 3D risk analysis are described. We also discuss implications for banking and ways to track risks in order to improve accuracy. We use a conceptual cloud platform to explain our contributions to Financial Software as a Service (FSaaS) and the IBM Fine-Grained Security Framework. Our objective is to demonstrate the portability, speed, accuracy and reliability of applications in the clouds, while demonstrating portability for FSaaS and the Cloud Computing Business Framework (CCBF), which is proposed to address cloud portability.
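The two pricing techniques named in the abstract above can be sketched in a few lines (an illustrative sketch, not the paper's MATLAB code; the paper additionally applies the Least Squares Method within MCM, which is not shown here). The sketch prices a European call both with the Black-Scholes closed form and with a plain Monte Carlo estimate under geometric Brownian motion; all parameter values are illustrative.

```python
import math
import random

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes closed-form price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    # Standard normal CDF via the error function.
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def monte_carlo_call(S, K, r, sigma, T, n=200_000, seed=1):
    """Plain Monte Carlo estimate of the same call price.

    Simulates terminal prices under geometric Brownian motion and
    discounts the average payoff. Accuracy improves as O(1/sqrt(n)).
    """
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        ST = S * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        payoff_sum += max(ST - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n

# Illustrative parameters: spot 100, strike 100, 5% rate, 20% vol, 1 year.
bs = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)
mc = monte_carlo_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

Comparing the Monte Carlo estimate against the closed form is the usual sanity check before moving to payoffs (such as early-exercise options handled by least-squares Monte Carlo) that have no closed-form price.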
The Fibonacci scheme for fault-tolerant quantum computation
We rigorously analyze Knill's Fibonacci scheme for fault-tolerant quantum
computation, which is based on the recursive preparation of Bell states
protected by a concatenated error-detecting code. We prove lower bounds on the
threshold fault rate of 0.67\times 10^{-3} for adversarial local stochastic
noise, and 1.25\times 10^{-3} for independent depolarizing noise. In contrast
to other schemes with comparable proved accuracy thresholds, the Fibonacci
scheme has a significantly reduced overhead cost because it uses postselection
far more sparingly.
Comment: 24 pages, 10 figures; supersedes arXiv:0709.3603. (v2): Additional discussion about the overhead cost
- …