Demonstration of Adiabatic Variational Quantum Computing with a Superconducting Quantum Coprocessor
Adiabatic quantum computing enables the preparation of many-body ground
states. This is key for applications in chemistry, materials science, and
beyond. Realisation poses major experimental challenges: Direct analog
implementation requires complex Hamiltonian engineering, while the digitised
version needs deep quantum gate circuits. To bypass these obstacles, we suggest
an adiabatic variational hybrid algorithm, which employs short quantum circuits
and provides a systematic quantum adiabatic optimisation of the circuit
parameters. The quantum adiabatic theorem promises that not only the ground
state but also the excited eigenstates can be found. We report the first
experimental demonstration that many-body eigenstates can be efficiently
prepared by an adiabatic variational algorithm assisted by a multi-qubit
superconducting coprocessor. We track the real-time evolution of the ground and
excited states of transverse-field Ising spins with fidelities reaching about 99%.
Comment: 12 pages, 4 figures
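The adiabatic idea described in this abstract can be emulated classically for a toy transverse-field Ising pair. This is a hedged sketch, not the authors' algorithm or hardware: it simply follows the instantaneous ground state along an interpolated Hamiltonian, and the small longitudinal field and 51-step schedule are illustrative assumptions.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_all(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

n = 2  # two spins, for illustration only
# Initial Hamiltonian: transverse field (ground state |++>)
H0 = -sum(kron_all([X if i == j else I2 for i in range(n)]) for j in range(n))
# Final Hamiltonian: Ising coupling plus a small (assumed) symmetry-breaking field
H1 = -kron_all([Z, Z]) - 0.2 * (kron_all([Z, I2]) + kron_all([I2, Z]))

def ground_state(H):
    vals, vecs = np.linalg.eigh(H)
    return vals[0], vecs[:, 0]

# Follow the instantaneous ground state along H(s) = (1-s) H0 + s H1
prev = ground_state(H0)[1]
fids = []
for s in np.linspace(0.0, 1.0, 51):
    _, g = ground_state((1 - s) * H0 + s * H1)
    fids.append(abs(np.vdot(prev, g)) ** 2)  # step-to-step fidelity
    prev = g

# Along a gapped path the state changes slowly, so step fidelities stay high,
# and the sweep ends in the Ising ground state |00>
assert min(fids) > 0.9
assert abs(prev[0]) ** 2 > 0.99
```

The quantum version replaces the exact diagonalisation with short variational circuits whose parameters are updated along the same schedule.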
Witnessing eigenstates for quantum simulation of Hamiltonian spectra
The efficient calculation of Hamiltonian spectra, a problem often intractable
on classical machines, can find application in many fields, from physics to
chemistry. Here, we introduce the concept of an "eigenstate witness" and
through it provide a new quantum approach which combines variational methods
and phase estimation to approximate eigenvalues for both ground and excited
states. This protocol is experimentally verified on a programmable silicon
quantum photonic chip, a mass-manufacturable platform, which embeds entangled
state generation, arbitrary controlled-unitary operations, and projective
measurements. Both ground and excited states are experimentally found with
fidelities >99%, and their eigenvalues are estimated with 32 bits of precision.
We also investigate and discuss the scalability of the approach and study its
performance through numerical simulations of more complex Hamiltonians. This
result shows promising progress towards quantum chemistry on quantum computers.
Comment: 9 pages, 4 figures, plus Supplementary Material [New version with minor typos corrected.]
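The eigenstate-witness intuition can be sketched numerically: for an eigenstate of H, the time-evolved overlap ⟨ψ|e^{-iHt}|ψ⟩ has unit magnitude and its phase encodes the eigenvalue, while a superposition gives a magnitude below one. The 2x2 Hamiltonian and evolution time below are arbitrary illustrations of the principle, not the photonic protocol itself.

```python
import numpy as np

# Hypothetical 2x2 Hamiltonian and an arbitrary evolution time t
H = np.array([[1.0, 0.3], [0.3, -0.5]])
vals, vecs = np.linalg.eigh(H)
t = 0.7
U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T  # U = exp(-iHt)

def witness(psi):
    a = np.vdot(psi, U @ psi)
    return abs(a), -np.angle(a) / t  # (witness value, eigenvalue estimate)

# An exact eigenstate: witness magnitude 1, phase recovers the eigenvalue
w, E = witness(vecs[:, 0])
assert abs(w - 1) < 1e-9
assert abs(E - vals[0]) < 1e-9

# An equal superposition of eigenstates: witness magnitude drops below 1
w_mix, _ = witness((vecs[:, 0] + vecs[:, 1]) / np.sqrt(2))
assert w_mix < 1 - 1e-3
```

In the experiment this quantity is obtained from ancilla measurements rather than direct state access, and the variational search drives the witness towards 1.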
Uncertainty quantification in medical image synthesis
Machine learning approaches to medical image synthesis have shown
outstanding performance, but often do not convey uncertainty information. In this chapter, we survey uncertainty quantification methods in
medical image synthesis and advocate the use of uncertainty for improving clinicians’ trust in machine learning solutions. First, we describe basic
concepts in uncertainty quantification and discuss its potential benefits in
downstream applications. We then review computational strategies that
facilitate inference, and identify the main technical and clinical challenges.
We provide a first comprehensive review informing how to quantify, communicate, and use uncertainty in medical image synthesis applications.
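One common computational strategy surveyed in this area is to run several stochastic forward passes (e.g. MC dropout or an ensemble) and report the per-voxel spread as an uncertainty map alongside the synthesised image. A minimal sketch with random stand-in predictions; nothing here is a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for K stochastic forward passes over a small "image"
K, H, W = 20, 4, 4
preds = rng.normal(loc=1.0, scale=0.1, size=(K, H, W))

synthesis = preds.mean(axis=0)   # the synthesised image
uncertainty = preds.std(axis=0)  # per-voxel predictive uncertainty map

assert synthesis.shape == (H, W) and uncertainty.shape == (H, W)
assert np.all(uncertainty >= 0)
```

A clinician could then inspect the uncertainty map to judge where the synthesised intensities should not be trusted.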
Dense Vision in Image-guided Surgery
Image-guided surgery needs an efficient and effective camera tracking system to perform augmented reality, overlaying preoperative models or labelling cancerous tissue on the 2D video images of the surgical scene. Tracking in endoscopic/laparoscopic scenes, however, is an extremely difficult task, primarily due to tissue deformation, instrument intrusion into the surgical scene, and the presence of specular highlights. State-of-the-art feature-based SLAM systems such as PTAM fail to track such scenes because the number of good features to track is very limited, and smoke or instrument motion causes feature-based tracking to fail immediately.
The work of this thesis provides a systematic approach to this problem using dense vision. We initially attempted to register a 3D preoperative model with multiple 2D endoscopic/laparoscopic images using a dense method, but this approach did not perform well. We subsequently proposed stereo reconstruction to directly obtain the 3D structure of the scene. By using the dense reconstructed model together with robust estimation, we demonstrate that dense stereo tracking can be remarkably robust even within extremely challenging endoscopic/laparoscopic scenes.
Several validation experiments have been conducted in this thesis. The proposed stereo reconstruction algorithm achieves state-of-the-art results on several publicly available ground-truth datasets. Furthermore, the proposed robust dense stereo tracking algorithm has proved highly accurate in synthetic environments (< 0.1 mm RMSE) and qualitatively extremely robust when applied to real scenes from robot-assisted laparoscopic prostatectomy (RALP) surgery. This is an important step toward achieving accurate image-guided laparoscopic surgery.
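The robust-estimation ingredient mentioned above can be illustrated in miniature with iteratively reweighted least squares using a Tukey biweight, which drives the influence of gross outliers (such as specular highlights) to zero. This is a hypothetical line-fitting example, not the thesis' tracking pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + 0.01 * rng.normal(size=50)  # inliers on the line y = 2x + 1
y[:5] += 5.0                                # gross outliers

A = np.stack([x, np.ones_like(x)], axis=1)
w = np.ones(50)
for _ in range(20):  # iteratively reweighted least squares
    sol, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    r = y - A @ sol
    # Tukey biweight: zero weight beyond a robust (MAD-based) cutoff
    c = 4.685 * 1.4826 * np.median(np.abs(r - np.median(r)))
    w = np.where(np.abs(r) < c, (1 - (r / c) ** 2) ** 2, 0.0)

# Outliers are rejected and the true line parameters are recovered
assert abs(sol[0] - 2) < 0.1 and abs(sol[1] - 1) < 0.1
```

In dense tracking the same idea is applied per pixel, so deforming tissue and highlights are down-weighted rather than corrupting the pose estimate.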
Latent Disentanglement for the Analysis and Generation of Digital Human Shapes
Analysing and generating digital human shapes is crucial for a wide variety of applications, ranging from movie production to healthcare. The most common approaches for the analysis and generation of digital human shapes involve the creation of statistical shape models. At the heart of these techniques is the definition of a mapping between shapes and a low-dimensional representation. However, making these representations interpretable is still an open challenge. This thesis explores latent disentanglement as a powerful technique to make the latent space of geometric-deep-learning-based statistical shape models more structured and interpretable. In particular, it introduces two novel techniques to disentangle the latent representation of variational autoencoders and generative adversarial networks with respect to the local shape attributes characterising the identity of the generated body and head meshes. This work was inspired by a shape completion framework that was proposed as a viable alternative to intraoperative registration in minimally invasive surgery of the liver. In addition, one of these methods for latent disentanglement was also applied to plastic surgery, where it was shown to improve the diagnosis of craniofacial syndromes and aid surgical planning.
Sparse image reconstruction on the sphere: implications of a new sampling theorem
We study the impact of sampling theorems on the fidelity of sparse image
reconstruction on the sphere. We discuss how a reduction in the number of
samples required to represent all information content of a band-limited signal
acts to improve the fidelity of sparse image reconstruction, through both the
dimensionality and sparsity of signals. To demonstrate this result we consider
a simple inpainting problem on the sphere and consider images sparse in the
magnitude of their gradient. We develop a framework for total variation (TV)
inpainting on the sphere, including fast methods to render the inpainting
problem computationally feasible at high-resolution. Recently a new sampling
theorem on the sphere was developed, reducing the required number of samples by
a factor of two for equiangular sampling schemes. Through numerical simulations
we verify the enhanced fidelity of sparse image reconstruction due to the more
efficient sampling of the sphere provided by the new sampling theorem.
Comment: 11 pages, 5 figures
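A flat 1D analogue conveys the TV-inpainting formulation: minimise a data-fidelity term over the observed samples plus λ times a (smoothed) total-variation penalty. The sketch below uses plain gradient descent on the smoothed objective; the signal, mask, and parameters are illustrative, and the paper's actual framework operates on the sphere with fast spherical transforms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise-constant ground truth, roughly half of the samples observed
n = 64
x_true = np.where(np.arange(n) < n // 2, 1.0, -1.0)
mask = rng.random(n) < 0.5
y = x_true[mask]

lam, eps, step = 0.1, 1e-2, 0.1
x = np.zeros(n)
for _ in range(3000):
    grad = np.zeros(n)
    grad[mask] = x[mask] - y       # data-fidelity gradient on observed samples
    d = np.diff(x)
    tv = d / np.sqrt(d**2 + eps)   # gradient of the smoothed TV penalty
    grad[:-1] -= lam * tv
    grad[1:] += lam * tv
    x -= step * grad

# TV regularisation propagates observed values into the unobserved gaps
assert np.mean((x - x_true) ** 2) < 0.2
```

Because signals sparse in the gradient are recovered from fewer samples, reducing the number of samples needed per band-limit (as the new theorem does) directly improves reconstruction fidelity.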
Last Layer Marginal Likelihood for Invariance Learning
Data augmentation is often used to incorporate inductive biases into models.
Traditionally, these are hand-crafted and tuned with cross validation. The
Bayesian paradigm for model selection provides a path towards end-to-end
learning of invariances using only the training data, by optimising the
marginal likelihood. We work towards bringing this approach to neural networks
by using an architecture with a Gaussian process in the last layer, a model for
which the marginal likelihood can be computed. Experimentally, we improve
performance by learning appropriate invariances on standard benchmarks, in the
low-data regime, and in a medical imaging task. Optimisation challenges for
invariant Deep Kernel Gaussian processes are identified, and a systematic
analysis is presented to arrive at a robust training scheme. We introduce a new
lower bound to the marginal likelihood, which allows us to perform inference
for a larger class of likelihood functions than before, thereby overcoming some
of the training challenges that existed with previous approaches.
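The model-selection effect of the marginal likelihood can be seen in a tiny Bayesian linear model (a degenerate Gaussian process): a feature map that respects an invariance of the data earns a higher marginal likelihood than one that does not. This is a hedged sketch under arbitrary toy settings, not the paper's deep-kernel architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=40)
y = x**2 + 0.1 * rng.normal(size=40)  # the target is invariant under x -> -x

def log_marginal(Phi, y, sw2=1.0, sn2=0.01):
    """Exact log marginal likelihood of Bayesian linear regression on features Phi."""
    n = len(y)
    K = sw2 * Phi @ Phi.T + sn2 * np.eye(n)   # prior covariance of y
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

Phi_plain = np.stack([x, np.ones_like(x)], axis=1)   # ignores the symmetry
Phi_inv = np.stack([x**2, np.ones_like(x)], axis=1)  # respects x -> -x

# The invariant feature map receives the higher marginal likelihood
assert log_marginal(Phi_inv, y) > log_marginal(Phi_plain, y)
```

Optimising this quantity with respect to augmentation parameters is what allows invariances to be learned from the training data alone, without cross-validation.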