Noise-Robust End-to-End Quantum Control using Deep Autoregressive Policy Networks
Variational quantum eigensolvers have recently received increased attention,
as they enable the use of quantum computing devices to find solutions to
complex problems, such as the ground-state energy and ground state of
strongly-correlated quantum many-body systems. In many applications, it is the
optimization of both continuous and discrete parameters that poses a formidable
challenge. Using reinforcement learning (RL), we present a hybrid policy
gradient algorithm capable of simultaneously optimizing continuous and discrete
degrees of freedom in an uncertainty-resilient way. The hybrid policy is
modeled by a deep autoregressive neural network to capture causality. We employ
the algorithm to prepare the ground state of the nonintegrable quantum Ising
model in a unitary process, parametrized by a generalized quantum approximate
optimization ansatz: the RL agent solves the discrete combinatorial problem of
constructing the optimal sequences of unitaries out of a predefined set and, at
the same time, it optimizes the continuous durations for which these unitaries
are applied. We demonstrate the noise-robust features of the agent by
considering three sources of uncertainty: classical and quantum measurement
noise, and errors in the control unitary durations. Our work exhibits the
beneficial synergy between reinforcement learning and quantum control.
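
To make the hybrid action space concrete, the following is a minimal PyTorch sketch of an autoregressive policy that first samples a discrete unitary index and then, conditioned on that choice, samples a continuous duration. The module name (HybridAutoregressivePolicy), layer sizes, and the Gaussian duration head are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class HybridAutoregressivePolicy(nn.Module):
    """Sketch of a hybrid discrete/continuous policy: the discrete choice of
    unitary is sampled first, and the continuous duration is sampled
    conditioned on it (autoregressive factorization of the joint policy)."""

    def __init__(self, state_dim, num_unitaries, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.gate_logits = nn.Linear(hidden, num_unitaries)        # discrete head
        self.duration_head = nn.Linear(hidden + num_unitaries, 2)  # mean, log-std

    def forward(self, state):
        h = self.encoder(state)
        gate_dist = torch.distributions.Categorical(logits=self.gate_logits(h))
        gate = gate_dist.sample()                                   # which unitary
        one_hot = nn.functional.one_hot(gate, self.gate_logits.out_features).float()
        mu, log_std = self.duration_head(torch.cat([h, one_hot], dim=-1)).chunk(2, dim=-1)
        dur_dist = torch.distributions.Normal(mu, log_std.exp())
        duration = dur_dist.sample()                                # how long to apply it
        # joint log-probability of the (discrete, continuous) action, as used
        # in a policy-gradient update
        log_prob = gate_dist.log_prob(gate) + dur_dist.log_prob(duration).squeeze(-1)
        return gate, duration, log_prob
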
GHuNeRF: Generalizable Human NeRF from a Monocular Video
In this paper, we tackle the challenging task of learning a generalizable
human NeRF model from a monocular video. Although existing generalizable human
NeRFs have achieved impressive results, they require multi-view images or videos,
which might not always be available. On the other hand, some works on
free-viewpoint rendering of humans from monocular videos cannot generalize
to unseen identities. In view of these limitations, we propose GHuNeRF to learn
a generalizable human NeRF model from a monocular video of the human performer.
We first introduce a visibility-aware aggregation scheme to compute vertex-wise
features, which are used to construct a 3D feature volume. Due to its limited
resolution, the feature volume can only represent the overall geometry of the
human performer with insufficient accuracy. To address this, we further
enhance the volume feature with temporally aligned point-wise features using an
attention mechanism. Finally, the enhanced feature is used for predicting
density and color for each sampled point. A surface-guided sampling strategy is
also introduced to improve the efficiency of both training and inference. We
validate our approach on the widely-used ZJU-MoCap dataset, where we achieve
comparable performance to existing multi-view video-based approaches. We also
test on the monocular People-Snapshot dataset and achieve better performance
than existing works when only monocular video is used.
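
As a rough illustration of the attention-based feature enhancement described above, the following PyTorch sketch lets each sampled point's coarse volume feature attend over temporally aligned point-wise features gathered from the video frames. The module name (VolumeFeatureEnhancer), feature dimensions, and single-head attention are assumptions for illustration, not the released GHuNeRF code.

import torch
import torch.nn as nn

class VolumeFeatureEnhancer(nn.Module):
    """Sketch: enhance a sampled point's coarse volume feature by attending
    over temporally aligned per-frame point-wise features."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=1, batch_first=True)
        self.out = nn.Linear(feat_dim, feat_dim)

    def forward(self, volume_feat, frame_feats):
        # volume_feat: (num_points, feat_dim)             coarse feature from the 3D volume
        # frame_feats: (num_points, num_frames, feat_dim) temporally aligned point-wise features
        query = volume_feat.unsqueeze(1)                  # (num_points, 1, feat_dim)
        enhanced, _ = self.attn(query, frame_feats, frame_feats)
        # residual combination keeps the original volume feature as a baseline
        return self.out(enhanced.squeeze(1)) + volume_feat

The enhanced per-point feature would then be decoded into density and color, as in the pipeline described above; the residual connection here is a design assumption.
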