5 research outputs found
On proving the robustness of algorithms for early fault-tolerant quantum computers
The hope of the quantum computing field is that quantum architectures are
able to scale up and realize fault-tolerant quantum computing. Due to
engineering challenges, such "cheap" error correction may be decades away. In
the meantime, we anticipate an era of "costly" error correction, or early
fault-tolerant quantum computing. Costly error correction might warrant
settling for error-prone quantum computations. This motivates the development
of quantum algorithms which are robust to some degree of error as well as
methods to analyze their performance in the presence of error. We introduce a
randomized algorithm for the task of phase estimation and give an analysis of
its performance under two simple noise models. In both cases the analysis leads
to a noise threshold, below which arbitrarily high accuracy can be achieved by
increasing the number of samples used in the algorithm. As an application of
this general analysis, we compute the maximum ratio of the largest circuit
depth and the dephasing scale such that performance guarantees hold. We
calculate that the randomized algorithm can succeed with arbitrarily high
probability as long as the required circuit depth is less than 0.916 times the
dephasing scale.
Comment: 27 pages, 3 figures, 1 table, 1 algorithm. To be submitted to QIP 202
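As a rough illustration of the kind of sampling-based analysis involved, the sketch below simulates a depth-1 Hadamard-test phase estimator under an exponential dephasing model, where the signal cos(kθ) at depth k is damped by exp(-k/λ). The decay form, parameter values, and estimator are illustrative assumptions, not the paper's actual algorithm or noise models.

```python
import numpy as np

def hadamard_test_sample(theta, k, dephasing_scale, rng):
    # Simulated ±1 outcome of a Hadamard test at circuit depth k.
    # Assumed noise model: the signal cos(k*theta) is damped by exp(-k/lambda).
    p_plus = 0.5 * (1 + np.exp(-k / dephasing_scale) * np.cos(k * theta))
    return 1 if rng.random() < p_plus else -1

def estimate_phase(theta, shots, dephasing_scale, rng):
    # Simulate the estimator against a known true phase `theta`:
    # average many depth-1 samples, undo the (known) damping, invert cos.
    samples = [hadamard_test_sample(theta, 1, dephasing_scale, rng)
               for _ in range(shots)]
    damped = np.mean(samples)  # approximately exp(-1/lambda) * cos(theta)
    signal = np.clip(damped * np.exp(1 / dephasing_scale), -1.0, 1.0)
    return np.arccos(signal)
```

Increasing `shots` drives the statistical error down, mirroring the abstract's point that below a noise threshold, accuracy improves with the number of samples.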
Quantum circuit fidelity estimation using machine learning
The computational power of real-world quantum computers is limited by errors.
When using quantum computers to perform algorithms which cannot be efficiently
simulated classically, it is important to quantify the accuracy with which the
computation has been performed. In this work we introduce a
machine-learning-based technique to estimate the fidelity between the state
produced by a noisy quantum circuit and the target state corresponding to ideal
noise-free computation. Our machine learning model is trained in a supervised
manner, using smaller or simpler circuits for which the fidelity can be
estimated using other techniques like direct fidelity estimation and quantum
state tomography. We demonstrate that, for simulated random quantum circuits
with a realistic noise model, the trained model can predict the fidelities of
more complicated circuits for which such methods are infeasible. In particular,
we show the trained model may make predictions for circuits with higher degrees
of entanglement than were available in the training set, and that the model may
make predictions for non-Clifford circuits even when the training set included
only Clifford-reducible circuits. This empirical demonstration suggests
classical machine learning may be useful for making predictions about
beyond-classical quantum circuits for some non-trivial problems.
Comment: 27 pages, 6 figures
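A minimal numpy-only sketch of the supervised idea: fit a regression from simple circuit features to log-fidelity on small "training" circuits, then extrapolate to larger ones. The feature set (depth, two-qubit gate count) and the exponential-decay data model are illustrative assumptions, not the paper's model or training data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training circuits: features (depth, two-qubit gate count),
# with fidelity following an assumed exponential decay model plus noise.
a_true, b_true = 0.010, 0.004
train_X = rng.integers(1, 30, size=(200, 2)).astype(float)
log_f = -(a_true * train_X[:, 0] + b_true * train_X[:, 1])
train_y = log_f + rng.normal(0, 0.01, size=200)

# Supervised fit: least-squares linear regression on log-fidelity.
coef, *_ = np.linalg.lstsq(train_X, train_y, rcond=None)

def predict_fidelity(depth, gates):
    # Extrapolate the fitted decay model to a circuit outside the training range.
    return float(np.exp(coef[0] * depth + coef[1] * gates))
```

The same train-on-small, predict-on-large pattern is what the abstract describes, with the linear model standing in for the actual machine learning model.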
An Auction Based TATKAL Scheme For Indian Railway
Indian Railways operates three categories of trains: local, passenger, and express. Express trains require reservation under two schemes, normal and TATKAL, both of which currently allocate tickets on a first-come, first-served (FCFS) basis. The proposed auction-based TATKAL scheme instead uses bidding: the administrator decides when a bidding session starts and sets the base price, and each passenger may place at most five bids above that base price. Once the session expires, or a passenger has used all five bids, no further bids from that passenger are accepted. Tickets are then allocated to the highest bidders. The scheme benefits passengers with urgent travel needs who are willing to pay a premium for a ticket. For security, a one-time password (OTP) is used at user login.
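The bidding rules described above (base price, at most five bids per passenger, allocation to the highest bidders) can be sketched as follows; the class and method names are hypothetical, not from the paper's implementation.

```python
from collections import defaultdict

class TatkalAuction:
    MAX_BIDS = 5  # each passenger may bid at most five times per session

    def __init__(self, base_price, seats):
        self.base_price = base_price
        self.seats = seats
        self.best_bid = {}                 # passenger -> highest valid bid
        self.bid_counts = defaultdict(int)

    def place_bid(self, passenger, amount):
        # Reject bids below the base price or beyond the five-bid limit.
        if amount < self.base_price or self.bid_counts[passenger] >= self.MAX_BIDS:
            return False
        self.bid_counts[passenger] += 1
        self.best_bid[passenger] = max(amount, self.best_bid.get(passenger, 0))
        return True

    def allocate(self):
        # Tickets go to the highest bidders, one seat per passenger.
        ranked = sorted(self.best_bid.items(), key=lambda kv: -kv[1])
        return [passenger for passenger, _ in ranked[:self.seats]]
```

The OTP login step mentioned in the abstract is a separate authentication concern and is not modelled here.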
On Regenerating Codes and Proactive Secret Sharing: Relationships and Implications
We look at two basic coding theoretic and cryptographic mechanisms developed separately and investigate relationships between them and their implications. The first mechanism is Proactive Secret Sharing (PSS), which allows randomization and repair of shares using information from other shares. PSS enables constructing secure multi-party computation protocols that can withstand mobile dynamic attacks.
This self-recovery and the redundancy of uncorrupted shares allow a system to overcome recurring faults throughout its lifetime, eventually finishing the computation (or continuing forever to maintain stored data). The second mechanism is Regenerating Codes (RC), which have been extensively studied and adopted in distributed storage systems. RC are error-correcting (or erasure-handling) codes capable of recovering a block of a distributively held codeword from other servers' blocks. This self-healing nature makes a code distributed over different machines more robust. Given that both mechanisms have built-in self-healing (leading to stabilization) and that both can be based on Reed-Solomon codes, it is natural to formally investigate deeper relationships between them.
We prove that a PSS scheme can be converted into an RC scheme, and that under some conditions RC can be utilized to instantiate a PSS scheme. This allows us, in turn, to leverage recent results enabling more efficient polynomial interpolation (due to Guruswami and Wootters) to improve the efficiency of a PSS scheme. We also show that if parameters are not carefully calibrated, such interpolation techniques (allowing partial word leakage) may be used to attack a PSS scheme over time.
Secondly, the above relationships give rise to extended (de)coding notions. Our first example maps the generalized capabilities of adversaries (called generalized adversary structures) from the PSS realm into the RC one. Based on this we define a new variant of RC we call Generalized-decoding Regenerating Code (GRC), in which not all network servers hold a uniform sub-codeword (motivated by the case of non-uniform probability of attacking different servers). We finally highlight several interesting research directions arising from our results, e.g., designing new, improved GRCs and more adaptive RC re-coding techniques.
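Since both mechanisms can be based on Reed-Solomon codes, a useful reference point is Shamir secret sharing viewed through that lens: shares are evaluations of a random polynomial (i.e. Reed-Solomon codeword symbols), and any t of n shares recover the secret by Lagrange interpolation at zero. This is a minimal sketch of that shared foundation only, not the paper's PSS-to-RC conversion; the prime field is an illustrative choice.

```python
import random

P = 2**31 - 1  # prime modulus for the field GF(P) (illustrative choice)

def share(secret, t, n, rng=random):
    # Shares are evaluations of a random degree-(t-1) polynomial with
    # constant term `secret` -- exactly Reed-Solomon encoding of the secret.
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over GF(P); any t shares suffice.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

The repair/regeneration question studied in the paper is how to refresh or recover a single share from the others more cheaply than full reconstruction, which is where the Guruswami-Wootters interpolation results enter.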
Land cover clustering and classification of satellite images
Land cover classification refers to the process of using remote sensing data to categorize different types of land cover, such as vegetation, water bodies, and soil. This provides key information about the surface of the Earth and about future interactions between human activities and the environment, which in turn supports the development of sustainable land use practices and the protection of natural resources. This paper deals with classifying land cover using unsupervised and supervised methods: the unsupervised method detects land cover using the K-means clustering algorithm, and the supervised classification is done using a random forest classifier. The evaluation parameter values are calculated and compared for the input and output images.
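The unsupervised step can be sketched as a minimal Lloyd's-algorithm K-means over pixel spectra. The deterministic farthest-point initialisation is an illustrative choice, not necessarily the paper's; the supervised random-forest stage and the evaluation metrics are omitted.

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    # pixels: (N, bands) array of per-pixel spectral values.
    # Farthest-point initialisation: start from the first pixel, then
    # repeatedly add the pixel farthest from all chosen centres.
    centers = [pixels[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers],
                       axis=0)
        centers.append(pixels[dists.argmax()])
    centers = np.array(centers, dtype=float)

    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers
```

Each resulting cluster would then be interpreted as a land-cover class (e.g. water, vegetation, soil) by inspecting its mean spectrum.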