On the regularization and optimization in quantum detector tomography
Quantum detector tomography (QDT) is a fundamental technique for calibrating
quantum devices and performing quantum engineering tasks. In this paper, we
utilize regularization to improve the QDT accuracy whether the probe states
are informationally complete or informationally incomplete. In the
informationally complete scenario, without regularization, we optimize the
resource (probe state) distribution by converting it to a semidefinite
programming problem. Then in both the informationally complete and
informationally incomplete scenarios, we discuss different regularization forms
and prove that the mean squared error scales as O(1/N) or tends to a
constant with the number of state copies N under the static assumption. We also
characterize the ideal best regularization for the identifiable parameters,
accounting for both the informationally complete and informationally incomplete
scenarios. Numerical examples demonstrate the effectiveness of different
regularization forms and a quantum optical experiment test shows that a
suitable regularization form can achieve a reduced mean squared error.
Comment: 19 pages, 10 figures
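As a classical analogue of the regularized estimation described above, the following sketch uses a hypothetical linear model and a ridge (Tikhonov) regularizer in place of the actual QDT parameterization, and shows how the mean squared error of the estimate behaves as the number of state copies N grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the QDT linear model: measured frequencies y
# relate to the unknown detector parameters x through a design matrix A
# built from the probe states; with N state copies the shot noise on the
# frequencies scales as 1/sqrt(N).
m, d = 8, 6                       # 8 probe states, 6 detector parameters
A = rng.normal(size=(m, d))       # informationally complete (rank d)
x_true = rng.normal(size=d)

def ridge_estimate(A, y, lam):
    """Tikhonov-regularized least squares: argmin ||Ax - y||^2 + lam*||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

mses = []
for N in (10**2, 10**4, 10**6):
    y = A @ x_true + rng.normal(scale=1.0 / np.sqrt(N), size=m)
    x_hat = ridge_estimate(A, y, lam=1e-3)
    mses.append(np.mean((x_hat - x_true) ** 2))
# In this informationally complete setting the MSE shrinks roughly as O(1/N);
# with a fixed regularizer and fewer probe states than parameters it instead
# levels off at a bias-induced constant.
```

The two regimes in the comment mirror the abstract's dichotomy: O(1/N) decay versus convergence to a constant.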
Quantum Natural Gradient for Variational Bayes
Variational Bayes (VB) is a critical method in machine learning and
statistics, underpinning the recent success of Bayesian deep learning. The
natural gradient is an essential component of efficient VB estimation, but it
is prohibitively computationally expensive in high dimensions. We propose a
hybrid quantum-classical algorithm to improve the scaling properties of natural
gradient computation and make VB a truly computationally efficient method for
Bayesian inference in high-dimensional settings. The algorithm leverages matrix
inversion from the linear systems algorithm by Harrow, Hassidim, and Lloyd
[Phys. Rev. Lett. 103, 150502 (2009)] (HHL). We demonstrate that the matrix to be
inverted is sparse and the classical-quantum-classical handoffs are
sufficiently economical to preserve computational efficiency, making the
problem of natural gradient for VB an ideal application of HHL. We prove that,
under standard conditions, the VB algorithm with quantum natural gradient is
guaranteed to converge. Our regression-based natural gradient formulation is
also highly useful for classical VB.
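As a classical sketch of the step the HHL subroutine would accelerate, consider a diagonal-Gaussian variational family, whose Fisher information matrix is sparse and well structured; the natural gradient is then the solution of a linear system in that matrix rather than an explicit inverse. The family and the stand-in gradient below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy variational family: a diagonal Gaussian q(theta) = N(mu, diag(sigma^2))
# parameterized by eta = (mu, log sigma). Its Fisher information matrix is
# block diagonal -- diag(1/sigma^2) for the means and 2*I for the log-sigmas --
# i.e. exactly the kind of sparse, structured matrix HHL-style solvers target.
dim = 5
sigma = np.exp(rng.normal(size=dim))
F = np.diag(np.concatenate([1.0 / sigma**2, 2.0 * np.ones(dim)]))

# Stand-in for the Euclidean gradient of the variational objective.
grad = rng.normal(size=2 * dim)

# Natural gradient step: solve F x = grad rather than forming F^{-1},
# which is the operation that dominates the cost in high dimensions.
nat_grad = np.linalg.solve(F, grad)
```

In high dimensions the dense solve above is the bottleneck; the paper's point is that its sparsity makes it amenable to a quantum linear-systems subroutine.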
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
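The TT format emphasized above can be computed from a full tensor by the standard TT-SVD procedure, i.e. sequential truncated SVDs of matricizations. A minimal NumPy sketch (written for this summary, not taken from the monograph):

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """TT-SVD: decompose a full tensor into tensor-train (TT) cores by
    sequential truncated SVDs of its matricizations."""
    shape = tensor.shape
    cores, r = [], 1
    C = tensor
    for k in range(len(shape) - 1):
        C = C.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r_new = int(np.sum(s > eps * s[0]))          # relative rank truncation
        cores.append(U[:, :r_new].reshape(r, shape[k], r_new))
        C = s[:r_new, None] * Vt[:r_new]             # carry the remainder forward
        r = r_new
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(-1, 0))
    return out.reshape([c.shape[1] for c in cores])

# A rank-2 third-order tensor: 4*5*6 = 120 entries, but small TT cores suffice.
rng = np.random.default_rng(0)
a, b, c, u, v, w = (rng.normal(size=n) for n in (4, 5, 6, 4, 5, 6))
T = np.einsum('i,j,k->ijk', a, b, c) + np.einsum('i,j,k->ijk', u, v, w)
cores = tt_svd(T)
```

The low TT ranks found here are exactly the "super-compression" the text describes: storage grows with the ranks, not with the full tensor size.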
Quantum Detector and Process Tomography: Algorithm Design and Optimisation
This thesis develops new algorithms and investigates optimisation in quantum detector tomography (QDT) and quantum process tomography (QPT).
QDT is a fundamental technique for calibrating quantum devices and performing quantum engineering tasks. We design optimal probe states based on the minimum upper bound of the mean squared error (UMSE) and the maximum robustness. We establish lower bounds on the UMSE and on the condition number for the probe states, and provide concrete examples that achieve these lower bounds. To further enhance the estimation precision, we also propose a two-step adaptive QDT and present a sufficient condition on when the infidelity scales as O(1/N), where N is the number of state copies.
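The robustness criterion mentioned above can be made concrete in a small NumPy sketch: stack the vectorized probe density matrices into a design matrix and compare its condition number for the six single-qubit Pauli eigenstates ("cube" probe states) against random pure states. The specific probe sets are illustrative choices, not necessarily those constructed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

def probe_design(states):
    """Rows are vectorized density matrices |psi><psi| of the probe states;
    the conditioning of this matrix governs how noise in the measured
    frequencies propagates into the detector estimate."""
    return np.array([np.outer(psi, psi.conj()).reshape(-1) for psi in states])

# Six Pauli eigenstates ("cube" probe states) for a single qubit.
cube = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2),
        np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)]

# Six random pure states for comparison.
raw = rng.normal(size=(6, 2)) + 1j * rng.normal(size=(6, 2))
rand = [v / np.linalg.norm(v) for v in raw]

cond_cube = np.linalg.cond(probe_design(cube))
cond_rand = np.linalg.cond(probe_design(rand))
# The symmetric cube set is markedly better conditioned than a generic
# random set, which is the sense in which probe states can be "optimal".
```

For the cube states the condition number evaluates to sqrt(3), since the Gram matrix of the six density matrices has eigenvalues {3, 1, 1, 1, 0, 0}.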
We then utilize regularization to improve the QDT accuracy whether the probe states are informationally complete or informationally incomplete. We discuss different regularization forms and prove that the mean squared error scales as O(1/N) or tends to a constant with the number of state copies N under the static assumption. We also characterize the ideal best regularization for the identifiable parameters.
QPT is a critical task for characterizing the dynamics of quantum systems and achieving precise quantum control. We first study the identification of time-varying decoherence rates for open quantum systems. We expand the unknown decoherence rates in a Fourier series and take the expansion coefficients as optimisation variables. We then convert the problem into a minimax problem and apply a sequential linear programming technique to solve it.
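The expansion step can be sketched as follows: the unknown time-varying rate is represented by a truncated Fourier series, so identifying it reduces to choosing a finite coefficient vector. A plain least-squares fit on a made-up rate stands in here for the thesis's minimax formulation solved by sequential linear programming:

```python
import numpy as np

# Hypothetical time-varying decoherence rate gamma(t), sampled on a grid.
t = np.linspace(0.0, 1.0, 201)
gamma_true = 0.5 + 0.2 * np.cos(2 * np.pi * t) + 0.1 * np.sin(4 * np.pi * t)

def fourier_design(t, K):
    """Design matrix of a truncated Fourier series with K harmonics:
    columns 1, cos(2*pi*k*t), sin(2*pi*k*t) for k = 1..K."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * t))
        cols.append(np.sin(2 * np.pi * k * t))
    return np.stack(cols, axis=1)

# The unknown rate is now a finite coefficient vector; here we fit it by
# least squares (the thesis instead minimizes the worst-case error).
A = fourier_design(t, K=3)
coef, *_ = np.linalg.lstsq(A, gamma_true, rcond=None)
gamma_fit = A @ coef
```

Because the optimisation variables enter linearly through the design matrix, the minimax version is a linear program at each step, which is what makes sequential linear programming applicable.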
For general QPT, we propose a two-stage solution (TSS) for both trace-preserving and non-trace-preserving QPT. Using structure simplification, our algorithm has low computational complexity in terms of the dimension of the quantum system and the numbers of different types of input states and measurement operators. We establish an analytical upper bound on the error and then design the optimal input states and the optimal measurement operators, both based on minimizing the error upper bound and maximizing the robustness as characterized by the condition number.
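For orientation, a generic linear-inversion baseline for single-qubit QPT (a simplified stand-in, not the thesis's TSS algorithm) shows the data structure involved: probe states in, Pauli expectations out, and a least-squares fit of the channel's Pauli transfer matrix:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

def pauli_vec(rho):
    """Bloch-style coordinates of a density matrix in the Pauli basis."""
    return np.array([np.trace(p @ rho).real for p in paulis]) / np.sqrt(2)

# Channel to identify (known only to this simulator): bit flip with prob q.
q = 0.2
def channel(rho):
    return (1 - q) * rho + q * sx @ rho @ sx

# Informationally complete probes: the six Pauli eigenstates.
kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2),
        np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)]
probes = [np.outer(k, k.conj()) for k in kets]

S_in = np.stack([pauli_vec(r) for r in probes])            # (6, 4)
S_out = np.stack([pauli_vec(channel(r)) for r in probes])  # (6, 4)

# Fit the Pauli transfer matrix R from S_out = S_in @ R.T, i.e.
# pauli_vec(channel(rho)) = R @ pauli_vec(rho).
X, *_ = np.linalg.lstsq(S_in, S_out, rcond=None)
R = X.T
```

For the bit-flip channel the recovered transfer matrix is diagonal, diag(1, 1, 1-2q, 1-2q), which makes the example easy to check.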
A quantum optical experiment shows that a suitable regularization form can reach a lower mean squared error in QDT, and testing on an IBM quantum machine demonstrates the effectiveness of our TSS algorithm for QPT.
First Development and Demonstration of Fiber Optic Bolometer
The fiber optic bolometer (FOB) was demonstrated for the first time observing a fusion plasma, and a 2D FOB array was developed and demonstrated to provide high spatial resolution. The FOB is a novel type of bolometer that is theoretically immune to electromagnetic interference (EMI). A bolometer is a sensor that measures the power of incoming electromagnetic radiation. The most common bolometer used in fusion research is the resistive bolometer, which utilizes resistors in an electrical circuit. Because of the high EMI in a fusion environment, noise can be a serious problem in determining plasma radiation accurately. The demonstration at the DIII-D tokamak used a single-channel system with a measurement FOB and a reference FOB that was shielded from incoming radiation. The demonstration showed a negligible increase in noise in the fusion environment and acceptable absolute-value agreement with the resistive bolometers. Plasma radiation carries information about plasma phenomena, and its structure is unique to the plasma conditions. A 2D FOB array was designed for DIII-D to investigate plasma radiation near the divertor more rigorously and at higher resolution. The design parameters were optimized using Bayesian global optimization, a machine learning technique that is efficient for this multivariate nonlinear problem. A physics-based regularization built from a magnetic reconstruction profile was developed for the DIII-D implementation and used with an iterative inversion method. Neural network inversion methods were developed to avoid dependence on an arbitrary regularization strength and to enable between-plasma-shot inversions, although they could not overcome the problem of bias toward the input data. A new method of processing raw spectral data using the Fourier transform was developed for real-time analysis. The design from the optimization was validated with several analysis methods to characterize its performance.
The forward-modelled radiated power, divided into different sections, was compared to the values from the original synthetic radiation profiles. The central location and shape of various radiation profiles were analyzed and compared to the original values using a computer vision library. The regularized iterative methods performed well. The results demonstrate that the optimized 2D FOB array system will be able to answer important questions about plasma radiation structures.
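The inversion problem underlying such an array can be illustrated with a toy regularized tomographic reconstruction; the geometry matrix, emissivity profile, and regularization strength below are all invented stand-ins for the DIII-D setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy tomographic inversion of the kind used for bolometer arrays:
# measured channel powers y = G @ e, with G a (hypothetical) geometry
# matrix mapping emissivity pixels to sight-line integrals. The problem
# is underdetermined, so a smoothness (Tikhonov) regularization is added;
# the thesis's physics-based regularizer built from the magnetic
# reconstruction plays this role in the real system.
n_pix, n_chan = 50, 20
G = rng.random((n_chan, n_pix))
e_true = np.exp(-0.5 * ((np.arange(n_pix) - 25) / 5.0) ** 2)  # smooth profile
y = G @ e_true + rng.normal(scale=1e-3, size=n_chan)

# First-difference operator: penalizes rough emissivity profiles.
D = np.eye(n_pix) - np.eye(n_pix, k=1)
lam = 1e-2
e_hat = np.linalg.solve(G.T @ G + lam * D.T @ D, G.T @ y)
```

The regularization strength lam is the arbitrary hyperparameter that the thesis's neural-network inversions were designed to avoid choosing by hand.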
Classification before regression for improving the accuracy of glucose quantification using absorption spectroscopy
This work contributes to improving glucose quantification using near-infrared (NIR), mid-infrared (MIR), and combined NIR-MIR absorbance spectroscopy by classifying the spectral data before applying regression models. Both manual and automated classification are presented, based on three homogeneous classes defined following the clinical definition of the glycaemic ranges (hypoglycaemia, euglycaemia, and hyperglycaemia). For the manual classification, partial least squares and principal component regressions are applied to each class separately and are shown to improve quantification compared with applying the same regression models to the whole dataset. For the automatic classification, linear discriminant analysis coupled with principal component analysis is deployed, and regressions are applied to each class separately. The results are shown to outperform those of regressions over the entire dataset.
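The "classify, then regress" idea can be illustrated on synthetic data. The class structure and linear models below are invented for illustration; the paper itself classifies with LDA on principal-component scores and regresses with PLS/PCR per class:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "spectra" whose relationship to the target differs by class
# (0: hypo, 1: eu, 2: hyper), so one global linear model underfits while
# per-class models do not.
n, p = 300, 10
X = rng.normal(size=(n, p))
labels = rng.integers(0, 3, size=n)
betas = {0: np.linspace(1, 2, p), 1: np.linspace(-1, 1, p), 2: np.linspace(2, 0, p)}
y = np.array([X[i] @ betas[labels[i]] for i in range(n)])
y += rng.normal(scale=0.05, size=n)

def fit_ls(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# One global model over the whole dataset.
coef_global = fit_ls(X, y)
rmse_global = np.sqrt(np.mean((X @ coef_global - y) ** 2))

# Per-class models (class labels assumed known here; the paper obtains
# them from the classifier first).
preds = np.empty(n)
for c in (0, 1, 2):
    idx = labels == c
    preds[idx] = X[idx] @ fit_ls(X[idx], y[idx])
rmse_classwise = np.sqrt(np.mean((preds - y) ** 2))
```

When the class-conditional relationships differ, the class-wise error approaches the noise floor while the global model absorbs the between-class mismatch, which is the mechanism behind the paper's improvement.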