Methane Production by Methanogens In Simulated Subsurface Martian Environments
Methane has a typical atmospheric photochemical lifetime of ∼300 years on Mars, making contemporary reported detections (and non-detections) of methane a fiercely debated topic, due to the potential need for a present-day source. On Earth, most methane is produced by methanogenic microbes present in, e.g., ruminants, wetlands, lakes and permafrost. Of the four metabolic pathways on Earth, the hydrogenotrophic pathway is the most common, utilising CO2 and H2 as substrates. Both gases are present on Mars, as are liquid water, organics and the essential elements required for life (CHNOPS). Surface conditions on Mars are sterilising; however, the temperature and pressure of the subsurface are potentially favourable to life and shield against the sterilising surface conditions, making the subsurface a possible habitat for methanogens. Motivated by these subsurface parameters, a meta-analysis was conducted that redefined the statistical representation of several growth parameters for all type strains of methanogens and analysed multiple parameters simultaneously across multiple categories (e.g. metabolism), showing that the optimal average conditions in which to grow methanogens would be a meso-temperate (20 to 39 °C), hypersaline and slightly acidic environment. Two martian subsurface environments were simulated to determine whether environmental or chemical factors are inhibitory to methanogenesis. (1) Methanoculleus marisnigri was grown in a custom-built, high-pressure manifold at 60 bar and 25 °C to simulate the subsurface of Mars; no methane was produced because a technical issue left the medium oxygenated, but some cells survived five weeks of oxygenation. (2) Methanothermococcus okinawensis was grown in a simulated chemical environment at 1 bar and 60 °C that included a regolith simulant, a proposed martian brine and a Mars-relevant organic source (carbonaceous chondrite). The simulated chemical environment of subsurface Mars was not inhibitory to hydrogenotrophic methanogenesis, suggesting it is feasible (from a metabolic perspective) that subsurface methanogens could be producing contemporary methane on Mars.
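The meta-analysis step described above (aggregating a growth parameter across type strains and classifying the average into temperature bands) can be sketched as follows. The strain data below are illustrative placeholders, not the thesis's actual dataset; only the 20 to 39 °C "meso-temperate" band comes from the abstract.

```python
from statistics import mean

# Hypothetical optimal growth temperatures (deg C) for a few type strains,
# grouped by metabolic pathway -- illustrative values only.
strains = {
    "hydrogenotrophic": [25, 37, 38, 65],
    "acetoclastic": [35, 37],
    "methylotrophic": [30, 37],
}

def band(t):
    """Classify a mean temperature into coarse bands (boundaries from the
    abstract's meso-temperate range; other bands are our assumption)."""
    if t < 20:
        return "psychrotolerant"
    if t <= 39:
        return "meso-temperate"
    return "thermophilic"

summary = {pathway: band(mean(temps)) for pathway, temps in strains.items()}
```

Averaging per category and then classifying mirrors the "multiple parameters across multiple categories" analysis at a toy scale.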
Anisotropic Quantum Hall Droplets
We study two-dimensional (2D) droplets of non-interacting electrons in a
strong magnetic field, placed in a confining potential with arbitrary shape.
Using semiclassical methods adapted to the lowest Landau level, we show that
energy eigenstates are localized on level curves of the potential, with
position-dependent local widths and heights. This one-particle insight allows
us to deduce explicit formulas for many-body observables in the thermodynamic
limit: the droplet's density falls off at the boundary with an inhomogeneous
width inherited from the underlying wave functions, the many-body current
exhibits a Gaussian jump at the edge, and correlations along the edge are
long-ranged and inhomogeneous. We show that this is consistent with the
system's universal low-energy description as a free 1D chiral conformal field
theory of edge modes, known from earlier results in special geometries. Here,
the theory is homogeneous in terms of the canonical angle variable of the
potential, which follows from a delicate interplay between radial and angular
dependencies of the eigenfunctions. These results are likely to be observable
in solid-state systems or quantum simulators of 2D electron gases with a high
degree of control over the confining potential.
Comment: 25 pages, 5 figures. v2: minor improvements + new appendix on subleading correction
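The edge behaviour described above (the density dropping off with an inhomogeneous Gaussian width) can be illustrated numerically. The erfc profile below is the standard lowest-Landau-level edge result; the function names and parameter values are our illustrative choices, with the position-dependent width entering as the parameter `w`.

```python
import math

def edge_density(x, x_edge, w):
    """Coarse-grained filling fraction at distance x from the droplet
    centre, for an edge located at x_edge with local Gaussian width w.
    Deep inside the droplet the filling is ~1, far outside ~0."""
    return 0.5 * math.erfc((x - x_edge) / (math.sqrt(2.0) * w))
```

In the anisotropic setting of the paper, `w` would vary along the boundary level curve, producing the inhomogeneous edge width the abstract refers to.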
DeepMB: Deep neural network for real-time optoacoustic image reconstruction with adjustable speed of sound
Multispectral optoacoustic tomography (MSOT) is a high-resolution functional
imaging modality that can non-invasively access a broad range of
pathophysiological phenomena by quantifying the contrast of endogenous
chromophores in tissue. Real-time imaging is imperative to translate MSOT into
clinical imaging, visualize dynamic pathophysiological changes associated with
disease progression, and enable in situ diagnoses. Model-based reconstruction
affords state-of-the-art optoacoustic images; however, the image quality
provided by model-based reconstruction remains inaccessible during real-time
imaging because the algorithm is iterative and computationally demanding. Deep
learning affords faster reconstruction, but the lack of ground truth training
data can lead to reduced image quality for in vivo data. We introduce a
framework, termed DeepMB, that achieves accurate optoacoustic image
reconstruction for arbitrary input data in 31 ms per image by expressing
model-based reconstruction with a deep neural network. DeepMB facilitates
accurate generalization to experimental test data through training on signals
synthesized from real-world images and ground truth images generated by
model-based reconstruction. The framework affords in-focus images for a broad
range of anatomical locations because it supports dynamic adjustment of the
reconstruction speed of sound during imaging. Furthermore, DeepMB is compatible
with the data rates and image sizes of modern multispectral optoacoustic
tomography scanners. We evaluate DeepMB on a diverse dataset of in vivo images
and demonstrate that the framework reconstructs images 1000 times faster than
the iterative model-based reference method while affording near-identical image
qualities. Accurate and real-time image reconstructions with DeepMB can enable
full access to the high-resolution and multispectral contrast of handheld
optoacoustic tomography
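The training-data strategy described above (synthesize signals from real-world images, with targets produced by model-based reconstruction) can be sketched at a toy scale. Everything here is a placeholder: a random matrix stands in for the optoacoustic forward operator, and a pseudoinverse stands in for the iterative model-based solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_sensors = 16, 32
A = rng.standard_normal((n_sensors, n_pixels))   # toy acoustic forward model

def make_training_pair(image):
    """Synthesize detector signals from a (real-world) image, then produce
    the ground-truth target by 'model-based' reconstruction of exactly
    those signals, so target and input are physically consistent."""
    sinogram = A @ image                     # simulated optoacoustic signals
    target = np.linalg.pinv(A) @ sinogram    # stand-in for iterative MB recon
    return sinogram, target

image = rng.random(n_pixels)                 # stands in for a natural image
sino, target = make_training_pair(image)
```

The key point the sketch preserves: because the target is computed *from the synthesized signal*, the network learns the reconstruction operator itself rather than memorizing in vivo image statistics, which is what lets DeepMB generalize to arbitrary experimental inputs.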
Effects of Markovian noise and cavity disorders on the entanglement dynamics of double Jaynes-Cummings models
Dynamics of double Jaynes-Cummings models are studied in the presence of
Markovian noise and cavity disorders with specific attention to entanglement
sudden death and revivals. The study is focused on the glassy disorders, which
remain unchanged during the observations. The field is initially assumed to be
in a vacuum state, while the atoms are considered to be in a specific two-qubit
superposition state. Specifically, the study reveals that the presence of
noise or of a nonlinear pump results in interesting behaviors in the
entanglement dynamics. Further, entanglement sudden death is observed in the
presence of Markovian noise and a nonlinear pump, and sudden deaths and
revivals also appear in cases where they were initially absent for the chosen
states. The effect of noise on the dynamics of the system is to damp these
features, while that of the disorder is to wash them out. On the other hand,
the introduction of nonlinearity is found to speed up the dynamics of the
system.
Comment: Entanglement dynamics of variants of double Jaynes-Cummings models are studied
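Entanglement sudden death in such studies is diagnosed by a two-qubit entanglement measure hitting exactly zero at finite time. A minimal sketch of the standard diagnostic, the Wootters concurrence, evaluated here on Werner states as a stand-in for the paper's actual double Jaynes-Cummings density matrices:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho.
    C = max(0, l1 - l2 - l3 - l4), where l_i are the square roots of the
    eigenvalues of rho * (sy x sy) rho* (sy x sy), in decreasing order."""
    sy2 = np.kron(np.array([[0, -1j], [1j, 0]]),
                  np.array([[0, -1j], [1j, 0]]))
    R = rho @ sy2 @ rho.conj() @ sy2
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))).real)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)   # |Phi+> Bell state
bell = np.outer(phi, phi)

def werner(p):
    """Bell state mixed with white noise -- a toy model of decoherence."""
    return p * bell + (1 - p) * np.eye(4) / 4
```

For Werner states the concurrence is max(0, (3p - 1)/2), so it vanishes identically below p = 1/3: the abrupt arrival at exactly zero, rather than an asymptotic decay, is the "sudden death" phenomenon tracked in the abstract.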
Implementing any Linear Combination of Unitaries on Intermediate-term Quantum Computers
We develop three new methods to implement any Linear Combination of Unitaries
(LCU), a powerful quantum algorithmic tool with diverse applications. While the
standard LCU procedure requires several ancilla qubits and sophisticated
multi-qubit controlled operations, our methods consume significantly fewer
quantum resources. The first method (Single-Ancilla LCU) estimates expectation
values of observables with respect to any quantum state prepared by an LCU
procedure while requiring only a single ancilla qubit, and quantum circuits of
shorter depths. The second approach (Analog LCU) is a simple, physically
motivated, continuous-time analogue of LCU, tailored to hybrid qubit-qumode
systems. The third method (Ancilla-free LCU) requires no ancilla qubit at all
and is useful when we are interested in the projection of a quantum state
(prepared by the LCU procedure) in some subspace of interest. We apply the
first two techniques to develop new quantum algorithms for a wide range of
practical problems, including Hamiltonian simulation, ground state
preparation, property estimation, and quantum linear systems. Remarkably,
despite consuming fewer quantum resources they retain a provable quantum
advantage. The third technique allows us to connect discrete and
continuous-time quantum walks with their classical counterparts. It also
unifies the recently developed optimal quantum spatial search algorithms in
both these frameworks, and leads to the development of new ones. Additionally,
using this method, we establish a relationship between discrete-time and
continuous-time quantum walks, making inroads into a long-standing open
problem.
Comment: 72+16 pages, 3 figures
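At the statevector level, the map an LCU circuit implements can be sketched as follows. This ignores the circuit-level prepare/select structure and the ancilla-saving constructions the paper develops; it only shows the linear combination itself and its postselection probability.

```python
import numpy as np

def lcu_apply(coeffs, unitaries, psi):
    """Apply A = sum_i c_i U_i (c_i > 0) to |psi> and renormalize.
    An LCU circuit realises this probabilistically, succeeding with
    probability (||A|psi>|| / sum_i c_i)^2 upon postselection."""
    A_psi = sum(c * (U @ psi) for c, U in zip(coeffs, unitaries))
    norm = np.linalg.norm(A_psi)
    p_success = (norm / sum(coeffs)) ** 2
    return A_psi / norm, p_success

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
psi0 = np.array([1.0, 0.0])

# (I + X)/2 acting on |0> yields |+>, with success probability 1/2.
state, p = lcu_apply([0.5, 0.5], [I2, X], psi0)
```

The 1/sum(c_i) normalization is exactly why reducing ancilla overhead matters: the success probability degrades with the 1-norm of the coefficients, so cheaper repetitions (as in the Single-Ancilla method) directly reduce total cost.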
Algorithmic Shadow Spectroscopy
We present shadow spectroscopy as a simulator-agnostic quantum algorithm for
estimating energy gaps using very few circuit repetitions (shots) and no extra
resources (ancilla qubits) beyond performing time evolution and measurements.
The approach builds on the fundamental feature that every observable property
of a quantum system must evolve according to the same harmonic components: we
can reveal them by post-processing classical shadows of time-evolved quantum
states to extract a large number of time-periodic signals, whose
frequencies correspond to Hamiltonian energy differences with
Heisenberg-limited precision. We provide strong analytical guarantees that (a)
the required quantum resources are modest, while the classical computational
complexity is linear in the number of signals, (b) the signal-to-noise ratio
increases with the number of analysed signals, and (c) peak frequencies are
immune to reasonable levels of noise. Moreover, performing shadow spectroscopy
to probe model spin systems and the excited-state conical intersection of
molecular CH2 in simulation verifies that the approach is intuitively easy to
use in practice, robust against gate noise, amenable to a new type of
algorithmic error-mitigation technique, and uses orders of magnitude fewer
shots than typical near-term quantum algorithms: as few as 10 shots per time
step suffice. Finally, we measured a high-quality experimental shadow spectrum
of a spin chain on readily available IBM quantum computers, achieving the same
precision as in noise-free simulations without using any advanced error
mitigation.
Comment: 31 pages, 13 figures, new results with hardware and figures
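The core signal-processing idea, that any observable of a time-evolved state oscillates at Hamiltonian energy differences recoverable by a Fourier transform, can be sketched with exact expectation values standing in for classical shadows. All parameter values are illustrative.

```python
import numpy as np

# For H = diag(E0, E1) and initial state |+>, the expectation <X(t)>
# oscillates as cos((E1 - E0) t); the spectrum peaks at the energy gap.
E0, E1 = 0.0, 1.5                     # eigenenergies; gap = 1.5
dt, n = 0.1, 512
t = dt * np.arange(n)
signal = np.cos((E1 - E0) * t)        # exact time-periodic signal

freqs = 2 * np.pi * np.fft.rfftfreq(n, d=dt)   # angular frequency bins
peak = freqs[np.argmax(np.abs(np.fft.rfft(signal)))]
```

In the actual algorithm, many such signals (one per measured observable, estimated noisily from a few shadow shots each) share the same harmonic content, which is why averaging their spectra boosts the signal-to-noise ratio.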
Modeling correlated uncertainties in stochastic compartmental models
We consider compartmental models of communicable disease with uncertain
contact rates. Stochastic fluctuations are often added to the contact rate to
account for uncertainties. White noise, which is the typical choice for the
fluctuations, leads to significant underestimation of the disease severity.
Here, starting from reasonable assumptions on the social behavior of
individuals, we model the contacts as a Markov process which takes into account
the temporal correlations present in human social activities. Consequently, we
show that the mean-reverting Ornstein-Uhlenbeck (OU) process is the correct
model for the stochastic contact rate. We demonstrate the implication of our
model on two examples: a Susceptibles-Infected-Susceptibles (SIS) model and a
Susceptibles-Exposed-Infected-Removed (SEIR) model of the COVID-19 pandemic. In
particular, we observe that both compartmental models with white noise
uncertainties undergo transitions that lead to the systematic underestimation
of the spread of the disease. In contrast, modeling the contact rate with the
OU process significantly hinders such unrealistic noise-induced transitions.
For the SIS model, we derive its stationary probability density analytically,
for both white and correlated noise. This allows us to give a complete
description of the model's asymptotic behavior as a function of its bifurcation
parameters, i.e., the basic reproduction number, noise intensity, and
correlation time. For the SEIR model, where the probability density is not
available in closed form, we study the transitions using Monte Carlo
simulations. Our study underscores the necessity of temporal correlations in
stochastic compartmental models and the need for more empirical studies that
would systematically quantify such correlations.
Comment: 36 pages, 8 figures
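A minimal Euler-Maruyama sketch of the modeling choice described above: an SIS model whose contact rate follows a mean-reverting Ornstein-Uhlenbeck process rather than white noise. Parameter values are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_bar, theta, sigma = 0.3, 1.0, 0.1   # OU mean, reversion rate, noise
gamma = 0.1                               # recovery rate
dt, n = 0.01, 10_000                      # time step, number of steps

I, beta = 0.01, beta_bar                  # infected fraction, contact rate
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    # OU contact rate: reverts to beta_bar with correlation time 1/theta.
    beta += theta * (beta_bar - beta) * dt + sigma * dW
    # SIS dynamics with S = 1 - I.
    I += (beta * (1 - I) * I - gamma * I) * dt
    I = min(max(I, 0.0), 1.0)             # keep the fraction physical
```

Because the OU process stays concentrated around its mean (stationary standard deviation sigma/sqrt(2*theta)), the effective reproduction number rarely dips below 1, which is the mechanism by which correlated noise hinders the spurious extinction transitions that white noise induces.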
Mining Butterflies in Streaming Graphs
This thesis introduces two main-memory systems sGrapp and sGradd for performing the fundamental analytic tasks of biclique counting and concept drift detection over a streaming graph. A data-driven heuristic is used to architect the systems. To this end, initially, the growth patterns of bipartite streaming graphs are mined and the emergence principles of streaming motifs are discovered. Next, the discovered principles are (a) explained by a graph generator called sGrow; and (b) utilized to establish the requirements for efficient, effective, explainable, and interpretable management and processing of streams. sGrow is used to benchmark stream analytics, particularly in the case of concept drift detection.
sGrow displays robust realization of streaming growth patterns independent of initial conditions, scale, temporal characteristics, and model configurations. Extensive evaluations confirm the simultaneous effectiveness and efficiency of sGrapp and sGradd. sGrapp achieves a mean absolute percentage error of up to 0.05/0.14 for the cumulative butterfly count in streaming graphs with uniform/non-uniform temporal distribution, and a processing throughput of 1.5 million data records per second. The throughput and estimation error of sGrapp are 160x higher and 0.02x lower, respectively, than those of baselines. sGradd demonstrates improving performance over time, achieves zero false-detection rates both when no drift is present and when a drift has already been detected, and detects sequential drifts within zero to a few seconds of their occurrence, regardless of drift intervals.
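The counting task at the heart of the above can be made concrete with a tiny exact butterfly (2x2 biclique) counter for a static bipartite graph. sGrapp estimates this quantity over a stream; this exact baseline is only for intuition, and the graph representation is our choice.

```python
from itertools import combinations

def count_butterflies(adj):
    """Exact butterfly count of a bipartite graph, where adj maps each
    left vertex to its set of right neighbours. Each pair of left
    vertices with c common neighbours contributes C(c, 2) butterflies."""
    total = 0
    for u, v in combinations(adj, 2):
        common = len(adj[u] & adj[v])
        total += common * (common - 1) // 2
    return total

# Complete bipartite graph K_{2,2} is exactly one butterfly.
k22 = {"a": {0, 1}, "b": {0, 1}}
```

The quadratic pairwise scan is exactly what becomes infeasible at streaming rates, motivating the approximate, windowed approach the thesis takes.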
Pre-optimizing variational quantum eigensolvers with tensor networks
The variational quantum eigensolver (VQE) is a promising algorithm for
demonstrating quantum advantage in the noisy intermediate-scale quantum (NISQ)
era. However, optimizing VQE from random initial starting parameters is
challenging due to a variety of issues including barren plateaus, optimization
in the presence of noise, and slow convergence. While simulating quantum
circuits classically is generically difficult, classical computing methods have
been developed extensively, and powerful tools now exist to approximately
simulate quantum circuits. This opens up various strategies that limit the
amount of optimization that needs to be performed on quantum hardware. Here we
present and benchmark an approach where we find good starting parameters for
parameterized quantum circuits by classically simulating VQE by approximating
the parameterized quantum circuit (PQC) as a matrix product state (MPS) with a
limited bond dimension. Calling this approach the variational tensor network
eigensolver (VTNE), we apply it to the 1D and 2D Fermi-Hubbard model with
system sizes that use up to 32 qubits. We find that in 1D, VTNE can find
parameters for PQCs whose energy error is within 0.5% of the ground-state
energy. In 2D, the parameters that VTNE finds have significantly lower energy
than their starting configurations, and we show that starting VQE from these
parameters requires non-trivially fewer operations to come down to a given
energy. The higher the bond dimension we use in VTNE, the less work needs to be
done in VQE. By generating classically optimized parameters as the
initialization for the quantum circuit one can alleviate many of the challenges
that plague VQE on quantum computers.
Comment: 10 pages
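The handoff idea can be caricatured with a single parameter: optimize classically first, then use the optimum to initialize VQE on hardware. Here an exact statevector stands in for the paper's bounded-bond-dimension MPS, and the one-qubit ansatz and Hamiltonian are our toy choices, not the paper's Fermi-Hubbard setup.

```python
import numpy as np

def ansatz(theta):
    """Ry(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """<psi|Z|psi> for the one-parameter ansatz (minimum -1 at theta = pi)."""
    psi = ansatz(theta)
    return psi[0] ** 2 - psi[1] ** 2

# Cheap classical pre-optimization over a parameter grid; theta0 would
# then seed the quantum-hardware VQE loop instead of a random start.
grid = np.linspace(0, 2 * np.pi, 201)
theta0 = grid[np.argmin([energy(t) for t in grid])]
```

The trade-off the abstract describes maps onto this sketch directly: a more faithful classical simulation (higher bond dimension) yields a `theta0` closer to the true optimum, leaving less work for the noisy quantum optimizer.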
Differentiable matrix product states for simulating variational quantum computational chemistry
Quantum computing is believed to be the ultimate solution for quantum chemistry problems. Before the advent of large-scale, fully fault-tolerant quantum computers, the variational quantum eigensolver (VQE) is a promising heuristic quantum algorithm for solving real-world quantum chemistry problems on near-term noisy quantum computers. Here we propose a highly parallelizable classical simulator for VQE based on the matrix product state representation of a quantum state, which significantly extends the simulation range of existing simulators. Our simulator seamlessly integrates the quantum circuit evolution into the classical auto-differentiation framework, so gradients can be computed efficiently, as in a classical deep neural network, with a scaling that is independent of the number of variational parameters. As applications, we use our simulator to study commonly used small molecules such as HF, HCl, LiH and H2O, as well as larger molecules such as CO2 and BeH2 that require larger qubit counts. The favorable scaling of our simulator with the number of qubits and the number of parameters makes it an ideal testing ground for near-term quantum algorithms and a benchmarking baseline for upcoming large-scale VQE experiments on noisy quantum computers
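The efficient-gradient claim can be illustrated without an autodiff framework: for gates generated by Pauli operators, the parameter-shift rule gives the exact derivative of the energy, which we verify against a finite difference. This shows only the gradient idea; the paper's actual approach back-propagates through the MPS contraction instead.

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def psi(theta):
    """Ry(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """E(theta) = <psi|Z|psi> = cos(theta)."""
    v = psi(theta)
    return float(v @ Z @ v)

def parameter_shift(theta):
    """Exact dE/dtheta for a Pauli-generated rotation gate."""
    return 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))

theta = 0.7
exact = parameter_shift(theta)                           # -sin(0.7)
fd = (energy(theta + 1e-5) - energy(theta - 1e-5)) / 2e-5
```

Two extra energy evaluations per parameter is what makes gradient scaling on hardware costly, and why a classical simulator whose gradient cost is independent of the parameter count is a useful benchmarking baseline.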