    Demonstration of non-Markovian process characterisation and control on a quantum processor

    In the scale-up of quantum computers, the framework underpinning fault-tolerance generally relies on the strong assumption that environmental noise affecting qubit logic is uncorrelated (Markovian). However, as physical devices progress well into the complex multi-qubit regime, attention is turning to understanding the appearance and mitigation of correlated -- or non-Markovian -- noise, which poses a serious challenge to the progression of quantum technology. This error type has previously remained elusive to characterisation techniques. Here, we develop a framework for characterising non-Markovian dynamics in quantum systems and experimentally test it on multi-qubit superconducting quantum devices. Where noisy processes cannot be accounted for using standard Markovian techniques, our reconstruction predicts the behaviour of the devices with an infidelity of $10^{-3}$. Our results show this characterisation technique leads to superior quantum control and extension of coherence time by effective decoupling from the non-Markovian environment. This framework, validated by our results, is applicable to any controlled quantum device and offers a significant step towards optimal device operation and noise reduction.
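
    The abstract quotes a reconstruction infidelity of $10^{-3}$ without spelling out the figure of merit; as a point of reference only, a common choice (an assumption here, not a statement about this paper) is the Uhlmann infidelity between the predicted state $\rho$ and the measured state $\sigma$,

        $1 - F(\rho, \sigma), \qquad F(\rho, \sigma) = \left(\mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\right)^{2}.$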

    Cosmic Gravitational Shear from the HST Medium Deep Survey

    We present a measurement of cosmic shear on scales ranging from 10\arcsec to 2\arcmin in 347 WFPC2 images of random fields. Our result is based on shapes measured via image fitting and on a simple statistical technique; careful calibration of each step allows us to quantify our systematic uncertainties and to measure the cosmic shear down to very small angular scales. The WFPC2 images provide a robust measurement of the cosmic shear signal, decreasing from 5.2% at 10\arcsec to 2.2% at 130\arcsec. Comment: 4 pages, 2 Postscript figures, uses emulateapj.cls; Astrophysical Journal Letters, December 1, 200

    Compact Nuclei in Moderately Redshifted Galaxies

    The Hubble Space Telescope WFPC2 is being used to obtain high-resolution images in the V and I bands for several thousand distant galaxies as part of the Medium Deep Survey (MDS). An important scientific aim of the MDS is to identify possible AGN candidates from these images in order to measure the faint end of the AGN luminosity function as well as to study the host galaxies of AGNs and nuclear starburst systems. We are able to identify candidate objects based on morphology. Candidates are selected by fitting bulge+disk models and bulge+disk+point source nuclei models to HST-imaged galaxies and determining the best model fit to the galaxy light profile. We present results from a sample of MDS galaxies with I less than 21.5 mag that have been searched for AGN/starburst nuclei in this manner. We identify 84 candidates with unresolved nuclei in a sample of 825 galaxies. For the expected range of galaxy redshifts, all normal bulges are resolved. Most of the candidates are found in galaxies displaying exponential disks, with some containing an additional bulge component. 5% of the hosts are dominated by an r^-1/4 bulge. The V-I color distribution of the nuclei is consistent with a dominant population of Seyfert-type nuclei combined with an additional population of starbursts. Our results suggest that 10% +/- 1% of field galaxies at z less than 0.6 may contain AGN/starburst nuclei that are 1 to 5 magnitudes fainter than the host galaxies. Comment: 12 pages, AASTeX manuscript, 3 separate Postscript figures, to be published in ApJ Letters
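
    To make the model-fitting step above concrete, a minimal form of the light-profile decomposition is the sum of a de Vaucouleurs bulge, an exponential disk, and an unresolved nuclear point source convolved with the PSF; the exact parametrisation used in the MDS analysis may differ, so treat this as an illustrative sketch rather than the authors' fitting function:

        $I(r) = I_{b}\exp\!\left[-7.67\left((r/r_{e})^{1/4} - 1\right)\right] + I_{d}\,e^{-r/r_{d}} + f_{\mathrm{ps}}\,\mathrm{PSF}(r)$

    A candidate nucleus is then flagged when the best fit requires a non-negligible point-source fraction $f_{\mathrm{ps}}$ on top of the bulge and/or disk components.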

    Compact Nuclei in Galaxies at Moderate Redshift: I. Imaging and Spectroscopy

    This study explores the space density and properties of active galaxies to z=0.8. We have investigated the frequency and nature of unresolved nuclei in galaxies at moderate redshift as indicators of nuclear activity such as Active Galactic Nuclei (AGN) or starbursts. Candidates are selected by fitting imaged galaxies with multi-component models using maximum likelihood estimation techniques to determine the best model fit. We select those galaxies requiring an unresolved, point-source component in the galaxy nucleus, in addition to a disk and/or bulge component, to adequately model the galaxy light. We have searched 70 WFPC2 images, primarily from the Medium Deep Survey, for galaxies containing compact nuclei. In our survey of 1033 galaxies, the fraction containing an unresolved nuclear component greater than 3% of the total galaxy light is 16+/-3%, corrected for incompleteness, and 9+/-1% for nuclei greater than 5% of the galaxy light. Spectroscopic redshifts have been obtained for 35 of our AGN/starburst candidates, and photometric redshifts are estimated to an accuracy of sigma_z=0.1 for the remaining sample. In this paper, the first of two in this series, we present the selected HST-imaged galaxies having unresolved nuclei and discuss the selection procedure. We also present the ground-based spectroscopy for these galaxies as well as the photometric redshifts estimated for those galaxies without spectra. Comment: 56 pages, 22 figures, to appear in ApJ Supplement Series, April 199

    Filtering crosstalk from bath non-Markovianity via spacetime classical shadows

    From an open-system perspective, non-Markovian effects due to a nearby bath or neighbouring qubits are dynamically equivalent. However, there is a conceptual distinction to account for: neighbouring qubits may be controlled. We combine recent advances in non-Markovian quantum process tomography with the framework of classical shadows to characterise spatiotemporal quantum correlations. Observables here constitute operations applied to the system, where the free operation is the maximally depolarising channel. Using this as a causal break, we systematically erase causal pathways to narrow down the progenitors of temporal correlations. We show that one application of this is to filter out the effects of crosstalk and probe only non-Markovianity from an inaccessible bath. It also provides a lens on correlated noise spreading spatiotemporally throughout a lattice from common environments. We demonstrate both examples on synthetic data. Owing to the scaling of classical shadows, we can erase arbitrarily many neighbouring qubits at no extra cost. Our procedure is thus efficient and amenable to systems even with all-to-all interactions. Comment: 5 pages, 4 figures
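
    For context on the causal-break construction described above, the maximally depolarising channel replaces whatever enters it with the maximally mixed state, so no information can propagate through that leg; in the standard form (a textbook definition, not a quotation from the paper),

        $\mathcal{D}(\rho) = \mathrm{Tr}(\rho)\,\frac{\mathbb{1}}{d}$

    so inserting $\mathcal{D}$ on a chosen qubit at a chosen time erases that causal pathway, which is how the crosstalk contributions can be filtered out.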

    Combining computational fluid dynamics and neural networks to characterize microclimate extremes: Learning the complex interactions between meso-climate and urban morphology

    The urban form and extreme microclimate events can have an important impact on the energy performance of buildings, urban comfort and human health. State-of-the-art building energy simulations require information on the urban microclimate, but typically rely on ad-hoc numerical simulations, expensive in-situ measurements, or data from nearby weather stations. As such, they do not account for the full range of possible urban microclimate variability, and findings cannot be generalized across urban morphologies. To bridge this knowledge gap, this study proposes two data-driven models to downscale climate variables from the meso to the micro scale in arbitrary urban morphologies, with a focus on extreme climate conditions. The models are based on a feedforward and a deep neural network (NN) architecture, and are trained using results from computational fluid dynamics (CFD) simulations of flow over a series of idealized but representative urban environments, spanning a realistic range of urban morphologies. Both models show relatively good agreement with the corresponding CFD training data, with a coefficient of determination R^2 = 0.91 (R^2 = 0.89) and R^2 = 0.94 (R^2 = 0.92) for spatially-distributed wind magnitude and air temperature for the deep NN (feedforward NN). The models generalize well for unseen urban morphologies and mesoscale input data that are within the training bounds in the parameter space, with R^2 = 0.74 (R^2 = 0.69) and R^2 = 0.81 (R^2 = 0.74) for wind magnitude and air temperature for the deep NN (feedforward NN). The accuracy and efficiency of the proposed CFD-NN models make them well suited for the design of climate-resilient buildings at the early design stage.
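
    The abstract does not give the network details, so the following is only a minimal sketch of the kind of surrogate it describes: a small feedforward regressor mapping mesoscale forcing plus urban-morphology descriptors to a microscale target, scored with the same R^2 metric. All feature names, shapes, and the synthetic data below are hypothetical.

        # Minimal sketch (assumed setup): feedforward NN surrogate for CFD downscaling.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)
        # Hypothetical inputs: mesoscale wind speed, air temperature, plan-area density, mean building height
        X = rng.uniform(size=(2000, 4))
        # Hypothetical target standing in for a CFD-derived microscale wind magnitude
        y = 0.6 * X[:, 0] - 0.2 * X[:, 2] + 0.05 * rng.standard_normal(2000)

        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        model.fit(X[:1500], y[:1500])                        # train on 1500 samples
        r2 = r2_score(y[1500:], model.predict(X[1500:]))     # evaluate on the held-out 500
        print(f"held-out R^2 = {r2:.2f}")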

    Hydrographic Surveys at Seven Chutes and Three Backwaters on the Missouri River in Nebraska, Iowa, and Missouri, 2011-13

    The United States Geological Survey (USGS) cooperated with the United States Army Corps of Engineers (USACE), Omaha District, to complete hydrographic surveys of seven chutes and three backwaters on the Missouri River yearly during 2011–13. These chutes and backwaters were constructed by the USACE to increase the amount of available shallow water habitat (SWH) to support threatened and endangered species, as required by the amended “2000 Biological Opinion” on the operation of the Missouri River main-stem reservoir system. Chutes surveyed included Council chute, Plattsmouth chute, Tobacco chute, Upper Hamburg chute, Lower Hamburg chute, Kansas chute, and Deroin chute. Backwaters surveyed included Ponca backwater, Plattsmouth backwater, and Langdon backwater. Hydrographic data from these chute and backwater surveys will aid the USACE in assessing the current (2011–13) amount of available SWH, the effects river flow has had on the evolving morphology of the chutes and backwaters, and the functionality of the chute and backwater designs. Chutes and backwaters were surveyed from August through November 2011, June through November 2012, and May through October 2013. During the 2011 surveys, high water was present at all sites because of the major flooding on the Missouri River. The hydrographic survey data are published along with this report in comma-separated-values (csv) format with associated metadata.

    Hydrographic surveys included bathymetric and Real-Time Kinematic Global Navigation Satellite System (RTK GNSS) surveys. Hydrographic data were collected along transects extending across the channel from top of bank to top of bank. Transect segments with water depths greater than 1 meter were surveyed using a single-beam echosounder to measure depth and a differentially corrected global positioning system to measure location. These depth soundings were converted to elevation using water-surface-elevation information collected with the RTK GNSS. Transect segments with water depths less than 1 meter were surveyed using RTK GNSS alone. Surveyed features included top of bank, toe of bank, edge of water, sand bars, and near-shore areas.

    Discharge was measured at the chute survey sites, both in the main channel of the Missouri River upstream from the chute and in the chute itself. Many chute entrances and control structures were damaged by floodwater during the 2011 Missouri River flood, allowing a larger percentage of the total Missouri River discharge to flow through the chutes than originally intended in their design. The measured discharge splits between the main channel and the chutes at most sites were consistent with this flood damage. The USACE repaired many of these chutes in 2012 and 2013, and the resulting hydraulic changes are reflected in the discharge splits.
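
    The depth-to-elevation conversion described above is simple arithmetic once a water-surface elevation is available for the transect; a minimal sketch, assuming depths and elevations share the same vertical datum and units (function and variable names are hypothetical):

        # Minimal sketch (assumed names): convert echosounder depths to streambed elevations.
        def bed_elevation(water_surface_elevation_m: float, depth_m: float) -> float:
            """Streambed elevation = RTK GNSS water-surface elevation minus sounded depth."""
            return water_surface_elevation_m - depth_m

        # Hypothetical values for illustration only
        print(bed_elevation(water_surface_elevation_m=285.3, depth_m=2.4))  # -> 282.9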

    On the sampling complexity of open quantum systems

    Open quantum systems are ubiquitous in the physical sciences, with widespread applications in the areas of chemistry, condensed matter physics, materials science, optics, and many more. Not surprisingly, there is significant interest in their efficient simulation. However, direct classical simulation quickly becomes intractable with coupling to an environment whose effective dimension grows exponentially. This raises the question: can quantum computers help model these complex dynamics? A first step in answering this question requires understanding the computational complexity of this task. Here, we map the temporal complexity of a process to the spatial complexity of a many-body state using a computational model known as the process tensor framework. With this, we are able to explore the simulation complexity of an open quantum system as a dynamic sampling problem: a system coupled to an environment can be probed at successive points in time -- accessing multi-time correlations. The complexity of multi-time sampling, which is an important and interesting problem in its own right, contains the complexity of master equations and stochastic maps as a special case. Our results show how the complexity of the underlying quantum stochastic process corresponds to the complexity of the associated family of master equations for the dynamics. We present both analytical and numerical examples whose multi-time sampling is as complex as sampling from a many-body state that is classically hard. This also implies that the corresponding family of master equations is classically hard. Our results pave the way for studying open quantum systems from a complexity-theoretic perspective, highlighting the role quantum computers will play in our understanding of quantum dynamics.
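
    To unpack the sampling problem sketched above: in the process tensor picture, a k-step process is represented as a many-body (Choi) state $\Upsilon_{k:0}$, and the probability of observing outcomes $x_k, \ldots, x_1$ for a chosen sequence of interventions follows a generalised Born rule; the notation below is the standard one from the process-tensor literature and is not quoted from this abstract:

        $p(x_k, \ldots, x_1) = \mathrm{Tr}\!\left[\Upsilon_{k:0}\left(\mathcal{O}_{x_k}\otimes\cdots\otimes\mathcal{O}_{x_1}\right)^{\mathrm{T}}\right]$

    In this form, multi-time sampling inherits its hardness from the complexity of the underlying many-body state.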

    Luminosity Functions of Elliptical Galaxies at z < 1.2

    The luminosity functions of E/S0 galaxies are constructed in 3 different redshift bins (0.2 < z < 0.55, 0.55 < z < 0.8, 0.8 < z < 1.2), using the data from the Hubble Space Telescope Medium Deep Survey (HST MDS) and other HST surveys. These independent luminosity functions show a brightening in the luminosity of E/S0s by about 0.5-1.0 magnitudes at z~1, and no sign of significant number evolution. This is the first direct measurement of the luminosity evolution of E/S0 galaxies, and our results support the hypothesis of a high redshift of formation (z > 1) for elliptical galaxies, together with weak evolution of the major merger rate at z < 1. Comment: To be published in ApJ Letters, 4 pages, AAS LaTeX, 4 figures, and 2 tables

    The Morphologically Divided Redshift Distribution of Faint Galaxies

    We have constructed a morphologically divided redshift distribution of faint field galaxies using a statistically unbiased sample of 196 galaxies brighter than I = 21.5 for which detailed morphological information (from the Hubble Space Telescope) as well as ground-based spectroscopic redshifts are available. Galaxies are classified into 3 rough morphological types according to their visual appearance (E/S0s, Spirals, Sdm/dE/Irr/Pec's), and redshift distributions are constructed for each type. The most striking feature is the abundance of low to moderate redshift Sdm/dE/Irr/Pec's at I < 19.5. This confirms that the faint-end slope of the luminosity function (LF) is steep (alpha < -1.4) for these objects. We also find that Sdm/dE/Irr/Pec's are fairly abundant at moderate redshifts, and this can be explained by strong luminosity evolution. However, the normalization factor (or the number density) of the LF of Sdm/dE/Irr/Pec's is not much higher than that of the local LF of Sdm/dE/Irr/Pec's. Furthermore, as we go to fainter magnitudes, the abundance of moderate to high redshift Irr/Pec's increases considerably. This cannot be explained by strong luminosity evolution of the dwarf galaxy populations alone: these Irr/Pec's are probably the progenitors of present-day ellipticals and spiral galaxies which are undergoing rapid star formation or merging with their neighbors. On the other hand, the redshift distributions of E/S0s and spirals are fairly consistent with those expected from passive luminosity evolution, and are only in slight disagreement with the non-evolving model. Comment: 11 pages, 4 figures (published in ApJ)
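
    The faint-end slope alpha quoted above presumably refers to the usual Schechter parametrisation of the luminosity function; for reference (the abstract itself does not write the form out),

        $\phi(L)\,dL = \phi^{*}\left(L/L^{*}\right)^{\alpha} e^{-L/L^{*}}\,d\!\left(L/L^{*}\right)$

    so alpha < -1.4 corresponds to a steeply rising number density of faint Sdm/dE/Irr/Pec galaxies.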