179 research outputs found
Receipt for the Sale of 7 Enslaved Persons
Receipt for the sale from Alexander C. and Eliza M. McEwen to their daughter Elizabeth M. Featherston of seven enslaved persons named Julia Ann, Cornelius, Ilay Ann, Ellen, John, William, and Frances.
https://egrove.olemiss.edu/lanternproject/1051/thumbnail.jp
Receipt for the Sale of 7 Enslaved Persons
Receipt for the sale from Alexander C. and Eliza M. McEwen to their daughter Elizabeth M. Featherston of seven enslaved persons named Julia Ann, Cornelius, Ilay Ann, Ellen, John, William, and Frances.
https://scholarsjunction.msstate.edu/lantern-um/1052/thumbnail.jp
Lymphocyte subsets and the role of Th1/Th2 balance in stressed chronic pain patients
Background: The complex regional pain syndrome (CRPS) and fibromyalgia (FM) are chronic pain syndromes occurring in highly stressed individuals. Despite the known connection between the nervous system and immune cells, information on the distribution of lymphocyte subsets under stress and pain conditions is limited. Methods: We performed a comparative study in 15 patients with CRPS type I, 22 patients with FM and 37 age- and sex-matched healthy controls and investigated the influence of pain and stress on lymphocyte number, subpopulations and the Th1/Th2 cytokine ratio in T lymphocytes. Results: Lymphocyte numbers did not differ between groups. Quantitative analyses of lymphocyte subpopulations showed a significant reduction of cytotoxic CD8+ lymphocytes in both CRPS (p < 0.01) and FM (p < 0.05) patients as compared with healthy controls. Additionally, CRPS patients were characterized by a lower percentage of IL-2-producing T cell subpopulations, reflecting a diminished Th1 response, in contrast to no changes in the Th2 cytokine profile. Conclusions: Future studies are warranted to answer whether such immunological changes play a pathogenetic role in CRPS and FM or merely reflect the consequences of a pain-induced neurohumoral stress response, and whether they contribute to immunosuppression in stressed chronic pain patients. Copyright (c) 2008 S. Karger AG, Basel
Health surveillance of deployed military personnel occasionally leads to unexpected findings
Post-traumatic stress disorder (PTSD) can be caused by life-threatening illness, such as cancer and coronary events. The study by Forbes et al. made the unexpected finding that military personnel evacuated with medical illness have similar rates of PTSD to those evacuated with combat injuries. It may be that the illness acts as a nonspecific stressor that interacts with combat exposures to increase the risk of PTSD. Conversely, the inflammatory consequences of systemic illness may augment the effects of traumatic stress and facilitate the immunological abnormalities that are now being associated with PTSD and depression. The impact of stress on cytokine systems and their role in the onset of PTSD demands further investigation. Military personnel evacuated due to physical illness require similar screening and monitoring for the risk of PTSD as those with injuries, who are already known to be at high risk.
Alexander C McFarlan
Readout of a quantum processor with high dynamic range Josephson parametric amplifiers
We demonstrate a high dynamic range Josephson parametric amplifier (JPA) in
which the active nonlinear element is implemented using an array of rf-SQUIDs.
The device is matched to the 50 Ω environment with a Klopfenstein-taper
impedance transformer and achieves a bandwidth of 250-300 MHz, with input
saturation powers up to -95 dBm at 20 dB gain. A 54-qubit Sycamore processor
was used to benchmark these devices, providing a calibration for readout power,
an estimate of amplifier added noise, and a platform for comparison against
standard impedance matched parametric amplifiers with a single dc-SQUID. We
find that the high power rf-SQUID array design has no adverse effect on system
noise, readout fidelity, or qubit dephasing, and we estimate an upper bound on
amplifier added noise at 1.6 times the quantum limit. Lastly, amplifiers with
this design show no degradation in readout fidelity due to gain compression,
which can occur in multi-tone multiplexed readout with traditional JPAs.
Comment: 9 pages, 8 figures
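To make the saturation figures concrete, here is a minimal sketch of how gain compression defines an input saturation power. The soft-saturation model and all parameter values are illustrative assumptions, not the device's measured response:

```python
import numpy as np

# Hypothetical soft-saturation model (illustrative only): small-signal gain
# g0_db that compresses as the input power approaches p_sat_dbm.
def gain_db(p_in_dbm, g0_db=20.0, p_sat_dbm=-95.0):
    """Gain in dB of a simple saturating-amplifier model."""
    p_in = 10 ** (p_in_dbm / 10)    # convert dBm to mW
    p_sat = 10 ** (p_sat_dbm / 10)
    g0 = 10 ** (g0_db / 10)
    return 10 * np.log10(g0 / (1 + p_in / p_sat))

# 1 dB compression point: the input power at which the gain has dropped
# 1 dB below its small-signal value of 20 dB.
powers = np.linspace(-130.0, -80.0, 5001)
gains = gain_db(powers)
p1db = powers[np.argmin(np.abs(gains - 19.0))]
```

In this toy model the 1 dB compression point lands a few dB below `p_sat_dbm`; raising the saturation power, as the rf-SQUID array does relative to a single dc-SQUID, pushes compression (and the fidelity loss it causes in multiplexed readout) to higher probe powers.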
Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits
Scalable quantum computing can become a reality with error correction,
provided coherent qubits can be constructed in large arrays. The key premise is
that physical errors can remain both small and sufficiently uncorrelated as
devices scale, so that logical error rates can be exponentially suppressed.
However, energetic impacts from cosmic rays and latent radioactivity violate
both of these assumptions. An impinging particle ionizes the substrate,
radiating high energy phonons that induce a burst of quasiparticles, destroying
qubit coherence throughout the device. High-energy radiation has been
identified as a source of error in pilot superconducting quantum devices, but
lacking a measurement technique able to resolve a single event in detail, the
effect on large scale algorithms and error correction in particular remains an
open question. Elucidating the physics involved requires operating large
numbers of qubits at the same rapid timescales as in error correction, exposing
the event's evolution in time and spread in space. Here, we directly observe
high-energy rays impacting a large-scale quantum processor. We introduce a
rapid space and time-multiplexed measurement method and identify large bursts
of quasiparticles that simultaneously and severely limit the energy coherence
of all qubits, causing chip-wide failure. We track the events from their
initial localised impact to high error rates across the chip. Our results
provide direct insights into the scale and dynamics of these damaging error
bursts in large-scale devices, and highlight the necessity of mitigation to
enable quantum computing to scale.
Measurement-Induced State Transitions in a Superconducting Qubit: Within the Rotating Wave Approximation
Superconducting qubits typically use a dispersive readout scheme, where a
resonator is coupled to a qubit such that its frequency is qubit-state
dependent. Measurement is performed by driving the resonator, where the
transmitted resonator field yields information about the resonator frequency
and thus the qubit state. Ideally, we could use arbitrarily strong resonator
drives to achieve a target signal-to-noise ratio in the shortest possible time.
However, experiments have shown that when the average resonator photon number
exceeds a certain threshold, the qubit is excited out of its computational
subspace, which we refer to as a measurement-induced state transition. These
transitions degrade readout fidelity, and constitute leakage which precludes
further operation of the qubit in, for example, error correction. Here we study
these transitions using a transmon qubit by experimentally measuring their
dependence on qubit frequency, average photon number, and qubit state, in the
regime where the resonator frequency is lower than the qubit frequency. We
observe signatures of resonant transitions between levels in the coupled
qubit-resonator system that exhibit noisy behavior when measured repeatedly in
time. We provide a semi-classical model of these transitions based on the
rotating wave approximation and use it to predict the onset of state
transitions in our experiments. Our results suggest the transmon is excited to
levels near the top of its cosine potential following a state transition, where
the charge dispersion of higher transmon levels explains the observed noisy
behavior of state transitions. Moreover, occupation in these higher energy
levels poses a major challenge for fast qubit reset.
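The dispersive readout mechanism described in the opening sentences can be sketched with a toy model: the resonator's effective frequency is pulled by ±χ depending on the qubit state, so the phase of a probe tone at the bare frequency separates the two states. All numbers here are illustrative assumptions, not the experiment's parameters:

```python
import math

# Toy dispersive readout (illustrative parameters, not from the experiment):
f_r = 6.0e9      # bare readout-resonator frequency, Hz (assumed)
chi = 1.0e6      # dispersive shift, Hz (assumed)
kappa = 5.0e6    # resonator linewidth, Hz (assumed)

def probe_phase(f_probe, qubit_state):
    """Phase of a single-pole (Lorentzian) resonator response, with the
    resonance pulled to f_r + chi or f_r - chi by the qubit state."""
    f_res = f_r + chi if qubit_state == 0 else f_r - chi
    return math.atan2(2.0 * (f_probe - f_res), kappa)

# Probing at the bare frequency yields opposite-sign phases for |0> and |1>.
phi0 = probe_phase(f_r, 0)
phi1 = probe_phase(f_r, 1)
```

A stronger drive populates the resonator with more photons and improves this phase discrimination per unit time, which is why the photon-number threshold for measurement-induced state transitions is the practical limit on readout speed.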
Overcoming leakage in scalable quantum error correction
Leakage of quantum information out of computational states into higher energy
states represents a major challenge in the pursuit of quantum error correction
(QEC). In a QEC circuit, leakage builds over time and spreads through
multi-qubit interactions. This leads to correlated errors that degrade the
exponential suppression of logical error with scale, challenging the
feasibility of QEC as a path towards fault-tolerant quantum computation. Here,
we demonstrate the execution of a distance-3 surface code and distance-21
bit-flip code on a Sycamore quantum processor where leakage is removed from all
qubits in each cycle. This shortens the lifetime of leakage and curtails its
ability to spread and induce correlated errors. We report a ten-fold reduction
in steady-state leakage population on the data qubits encoding the logical
state, and a low average leakage population throughout the entire device. The
leakage removal process itself efficiently
returns leakage population back to the computational basis, and adding it to a
code circuit prevents leakage from inducing correlated error across cycles,
restoring a fundamental assumption of QEC. With this demonstration that leakage
can be contained, we resolve a key challenge for practical QEC at scale.
Comment: Main text: 7 pages, 5 figures
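The effect of per-cycle leakage removal on the steady-state population can be illustrated with a simple rate model. The injection and removal probabilities below are assumed values for illustration, not the paper's measurements:

```python
# Toy per-cycle leakage model (rates are assumptions, not measured values):
# each QEC cycle injects leakage with probability gamma; a removal operation
# then returns a leaked qubit to the computational subspace with probability r.
def leakage_population(gamma, r, cycles):
    """Leaked-state population after a given number of cycles."""
    p = 0.0
    for _ in range(cycles):
        p = (p + gamma * (1.0 - p)) * (1.0 - r)  # inject, then remove
    return p

# With strong per-cycle removal the population saturates near gamma*(1-r)/r;
# with only weak residual decay it builds up to a much larger steady state.
with_removal = leakage_population(gamma=1e-3, r=0.5, cycles=200)
without_removal = leakage_population(gamma=1e-3, r=0.02, cycles=200)
```

Shortening the lifetime of leakage in this way is what curtails its spread into correlated errors across cycles.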
Suppressing quantum errors by scaling a surface code logical qubit
Practical quantum computing will require error rates that are well below what
is achievable with physical qubits. Quantum error correction offers a path to
algorithmically-relevant error rates by encoding logical qubits within many
physical qubits, where increasing the number of physical qubits enhances
protection against physical errors. However, introducing more qubits also
increases the number of error sources, so the density of errors must be
sufficiently low in order for logical performance to improve with increasing
code size. Here, we report the measurement of logical qubit performance scaling
across multiple code sizes, and demonstrate that our system of superconducting
qubits has sufficient performance to overcome the additional errors from
increasing qubit number. We find our distance-5 surface code logical qubit
modestly outperforms an ensemble of distance-3 logical qubits on average, both
in terms of logical error probability over 25 cycles and logical error per
cycle. To investigate
damaging, low-probability error sources, we run a distance-25 repetition code
and observe a floor in the logical error per round set by a single
high-energy event; the floor drops when this event is excluded. We are able
to accurately model our experiment, and from this model we can extract error
budgets that highlight the biggest challenges for future systems. These results
mark the first experimental demonstration where quantum error correction begins
to improve performance with increasing qubit number, illuminating the path to
reaching the logical error rates required for computation.
Comment: Main text: 6 pages, 4 figures. v2: Updated author list, references, Fig. S12, Table I
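The scaling argument in this abstract is commonly summarized by an error-suppression factor Λ: the logical error per cycle is expected to fall as ε_d ∝ Λ^(−(d+1)/2), so a larger code distance d helps only when Λ > 1. The prefactor and Λ below are assumed round numbers, not the paper's fitted values:

```python
# Illustrative surface-code scaling model (A and lam are assumed round
# numbers, not fitted values): eps_d ~ A * lam ** (-(d + 1) / 2).
def logical_error_per_cycle(d, A=0.1, lam=2.0):
    """Expected logical error per cycle at code distance d."""
    return A * lam ** (-(d + 1) / 2)

eps3 = logical_error_per_cycle(3)
eps5 = logical_error_per_cycle(5)
# With lam > 1, the distance-5 code outperforms distance-3.
```

With Λ below 1 the inequality reverses and added qubits make the logical qubit worse, which is exactly the regime the reported experiment had to escape.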
The NANOGrav 15 yr Data Set: Search for Transverse Polarization Modes in the Gravitational-wave Background
Recently we found compelling evidence for a gravitational-wave background with Hellings and Downs (HD) correlations in our 15 yr data set. These correlations describe gravitational waves as predicted by general relativity, which has two transverse polarization modes. However, more general metric theories of gravity can have additional polarization modes, which produce different interpulsar correlations. In this work, we search the NANOGrav 15 yr data set for evidence of a gravitational-wave background with quadrupolar HD and scalar-transverse (ST) correlations. We find that HD correlations are the best fit to the data and find no significant evidence in favor of ST correlations. While Bayes factors show strong evidence for a correlated signal, the data do not strongly prefer either correlation signature, with Bayes factors ∼2 when comparing HD to ST correlations, and ∼1 when comparing HD plus ST correlations to HD correlations alone. However, when modeled alongside HD correlations, the amplitude and spectral index posteriors for ST correlations are uninformative, with the HD process accounting for the vast majority of the total signal. Using the optimal statistic, a frequentist technique that focuses on the pulsar-pair cross-correlations, we find median signal-to-noise ratios of 5.0 for HD and 4.6 for ST correlations when fit for separately, and median signal-to-noise ratios of 3.5 for HD and 3.0 for ST correlations when fit for simultaneously. While the signal-to-noise ratios for each of the correlations are comparable, the estimated amplitude and spectral index for HD are a significantly better fit to the total signal, in agreement with our Bayesian analysis.
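The Hellings and Downs correlation pattern the abstract tests against is a closed-form function of the angular separation between pulsar pairs: for distinct pulsars, with x = (1 − cos θ)/2, Γ(θ) = 1/2 + (3/2)x ln x − x/4. A direct transcription:

```python
import math

# Hellings-Downs correlation for a pair of distinct pulsars separated by
# angle theta (radians), in the common normalization with Gamma(0) = 1/2.
def hellings_downs(theta):
    x = (1.0 - math.cos(theta)) / 2.0
    if x == 0.0:
        return 0.5  # the x*log(x) term vanishes as x -> 0
    return 0.5 + 1.5 * x * math.log(x) - 0.25 * x
```

The curve starts at 1/2 for coincident sky positions, dips negative near ~82° separation, and recovers to 1/4 at 180°; scalar-transverse correlations have a different angular shape, which is what the cross-correlation searches described above distinguish.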