Discrimination and synthesis of recursive quantum states in high-dimensional Hilbert spaces
We propose an interferometric method for statistically discriminating between
nonorthogonal states in high-dimensional Hilbert spaces for use in quantum
information processing. The method is illustrated for the case of photon
orbital angular momentum (OAM) states. These states belong to pairs of bases
that are mutually unbiased on a sequence of two-dimensional subspaces of the
full Hilbert space, but the vectors within the same basis are not necessarily
orthogonal to each other. Over multiple trials, this method allows
distinguishing OAM eigenstates from superpositions of multiple such
eigenstates. Variations of the same method are then shown to be capable of
preparing and detecting arbitrary linear combinations of states in Hilbert
space. One further variation allows the construction of chains of states
obeying recurrence relations on the Hilbert space itself, opening a new range
of possibilities for more abstract information-coding algorithms to be carried
out experimentally in a simple manner. Among other applications, we show that
this approach provides a simplified means of switching between pairs of
high-dimensional mutually unbiased OAM bases.
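The mutual-unbiasedness condition mentioned above is easy to check numerically: two orthonormal bases of a d-dimensional space are mutually unbiased when every cross-overlap satisfies |⟨e_i|f_j⟩|² = 1/d. A minimal sketch (generic vectors standing in for the OAM states, not the paper's interferometric method):

```python
import numpy as np

# Two bases on a d = 2 subspace; columns are basis vectors.
d = 2
basis_a = np.eye(d)                                  # stand-ins for OAM eigenstates
basis_b = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # equal-weight superpositions

# Matrix of squared cross-overlaps |<e_i|f_j>|^2.
overlaps = np.abs(basis_a.conj().T @ basis_b) ** 2
print(overlaps)  # every entry equals 1/d = 0.5, so the bases are mutually unbiased
```

The same check extends directly to the paired two-dimensional subspaces described in the abstract.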
Quantum simulation of topologically protected states using directionally unbiased linear-optical multiports
It is shown that quantum walks on one-dimensional arrays of special
linear-optical units allow the simulation of discrete-time Hamiltonian systems
with distinct topological phases. In particular, a slightly modified version of
the Su-Schrieffer-Heeger (SSH) system can be simulated, which exhibits states
of nonzero winding number and has topologically protected boundary states. In
the large-system limit this approach uses quadratically fewer resources to
carry out quantum simulations than previous linear-optical approaches and can
be readily generalized to higher-dimensional systems. The basic optical units
that implement this simulation consist of combinations of optical multiports
that allow photons to reverse direction.
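The topological distinction the abstract refers to can be illustrated with the SSH model itself: the winding number of the off-diagonal Bloch component h(k) = v + w·e^{ik} (intra- and inter-cell hopping amplitudes v and w, values chosen for illustration) is 1 when w > v and 0 otherwise. A sketch, independent of any particular optical implementation:

```python
import numpy as np

# Winding number of h(k) = v + w * exp(1j*k) around the origin as k sweeps
# the Brillouin zone; nonzero winding signals topologically protected edge states.
def ssh_winding(v, w, nk=2001):
    k = np.linspace(-np.pi, np.pi, nk)
    h = v + w * np.exp(1j * k)
    phase = np.unwrap(np.angle(h))           # continuous phase along the curve
    return round((phase[-1] - phase[0]) / (2 * np.pi))

print(ssh_winding(0.5, 1.0))  # -> 1  (nontrivial phase)
print(ssh_winding(1.0, 0.5))  # -> 0  (trivial phase)
```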
Quantum simulation of discrete-time Hamiltonians using directionally unbiased linear optical multiports
Recently, a generalization of the standard optical multiport was proposed [Phys. Rev. A 93, 043845 (2016)]. These directionally unbiased multiports allow photons to reverse direction and exit backwards from the input port, providing a realistic linear optical scattering vertex for quantum walks on arbitrary graph structures. Here, it is shown that arrays of these multiports allow the simulation of a range of discrete-time Hamiltonian systems. Examples are described, including a case where both spatial and internal degrees of freedom are simulated. Because input ports also double as output ports, there is a substantial savings of resources compared to feed-forward networks carrying out the same functions. The simulation is implemented in a scalable manner using only linear optics, and can be generalized to higher-dimensional systems in a straightforward fashion, thus offering a concrete, experimentally achievable implementation of graphical models of discrete-time quantum systems. This research was supported by the National Science Foundation EFRI-ACQUIRE Grant No. ECCS-1640968, NSF Grant No. ECCS-1309209, and by Northrop Grumman NG Next.
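For context, the discrete-time quantum walks these multiport arrays implement generalize the textbook coin-plus-shift walk on a line, which can be sketched as follows (a standard construction, not the multiport implementation itself):

```python
import numpy as np

# Discrete-time quantum walk on a 1D lattice: internal 2-state "coin" plus a
# coin-conditioned shift. State array: psi[site, coin].
n = 41
psi = np.zeros((n, 2), dtype=complex)
psi[n // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]     # symmetric initial coin state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard coin

for _ in range(15):
    psi = psi @ H.T                                 # coin flip at every site
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]                    # coin 0 moves right
    shifted[:-1, 1] = psi[1:, 1]                    # coin 1 moves left
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)
print(f"total probability after 15 steps: {prob.sum():.6f}")  # -> 1.000000
```

Unlike this feed-forward picture, the multiport vertices in the paper also allow backward propagation, which is what enables walks on arbitrary graphs.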
Gz, a guanine nucleotide-binding protein with unique biochemical properties
Cloning of a complementary DNA (cDNA) for Gz alpha, a newly appreciated member of the family of guanine nucleotide-binding regulatory proteins (G proteins), has allowed preparation of specific antisera to identify the protein in tissues and to assay it during purification from bovine brain. Additionally, expression of the cDNA in Escherichia coli has resulted in the production and purification of the recombinant protein. Purification of Gz from bovine brain is tedious, and only small quantities of protein have been obtained. The protein copurifies with the beta gamma subunit complex common to other G proteins; another 26-kDa GTP-binding protein is also present in these preparations. The purified protein could not serve as a substrate for NAD-dependent ADP-ribosylation catalyzed by either pertussis toxin or cholera toxin. Purification of recombinant Gz alpha (rGz alpha) from E. coli is simple, and quantities of homogeneous protein sufficient for biochemical analysis are obtained. Purified rGz alpha has several properties that distinguish it from other G protein alpha subunit polypeptides. These include a very slow rate of guanine nucleotide exchange (k = 0.02 min^-1), which is reduced more than 20-fold in the presence of mM concentrations of Mg2+. In addition, the rate of the intrinsic GTPase activity of Gz alpha is extremely slow. The hydrolysis rate (kcat) for rGz alpha at 30 degrees C is 0.05 min^-1, or 200-fold slower than that determined for other G protein alpha subunits. rGz alpha can interact with bovine brain beta gamma but does not serve as a substrate for ADP-ribosylation catalyzed by either pertussis toxin or cholera toxin. These studies suggest that Gz may play a role in signal transduction pathways that are mechanistically distinct from those controlled by the other members of the G protein family.
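The quoted rate constants imply concrete timescales via simple first-order kinetics: with kcat = 0.05 min^-1, bound GTP has a half-life of ln 2 / kcat ≈ 13.9 min. A small illustration (the ~10 min^-1 figure for other alpha subunits is inferred from the stated 200-fold ratio, not given directly in the abstract):

```python
import math

# First-order decay of the GTP-bound state: fraction remaining = exp(-kcat * t).
def fraction_remaining(k_per_min, t_min):
    return math.exp(-k_per_min * t_min)

kcat_gz = 0.05              # min^-1, from the abstract (30 degrees C)
kcat_other = kcat_gz * 200  # ~10 min^-1, inferred for typical alpha subunits

print(f"rGz alpha GTP half-life: {math.log(2) / kcat_gz:.1f} min")      # ~13.9 min
print(f"typical alpha half-life: {math.log(2) / kcat_other:.3f} min")
print(f"GTP still bound to rGz alpha after 10 min: "
      f"{fraction_remaining(kcat_gz, 10):.2f}")                          # ~0.61
```

The two-orders-of-magnitude gap in half-lives is what makes Gz alpha kinetically distinctive.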
Degradation and breakdown characteristics of thin MgO dielectric layers
MgO has been suggested as a possible high-k dielectric for future complementary metal-oxide semiconductor processes. In this work, the time-dependent dielectric breakdown (TDDB) characteristics of 20 nm MgO films are discussed. Stress-induced leakage current measurements indicate that the low measured Weibull slopes of the TDDB distributions for both n-type and p-type devices cannot be attributed to a lower trap generation rate than for SiO2. This suggests that far fewer defects are required to trigger breakdown in MgO under voltage stress than is the case for SiO2 or other metal-oxide dielectrics. This in turn explains the progressive nature of the breakdown in these films, which is observed both in this work and elsewhere. The reason fewer defects are required is attributed to the morphology of the films.
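The Weibull slope discussed here is conventionally extracted from a plot of ln(−ln(1−F)) against ln(t_BD), whose gradient is the shape parameter β. A sketch with synthetic breakdown times (an assumed, standard workflow and illustrative parameters, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true, eta = 1.5, 100.0                       # illustrative shape and scale (a.u.)
t_bd = np.sort(eta * rng.weibull(beta_true, 200)) # synthetic breakdown times

# Bernard's median-rank estimate of the cumulative failure fraction F.
n = len(t_bd)
f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

# Weibull plot: ln(-ln(1-F)) vs ln(t); the fitted slope estimates beta.
beta_est, _ = np.polyfit(np.log(t_bd), np.log(-np.log(1 - f)), 1)
print(f"estimated Weibull slope: {beta_est:.2f}")
```

A low β (shallow slope) corresponds to a broad spread of breakdown times, consistent with breakdown triggered by very few defects.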
Beyond Mixing-length Theory: a step toward 321D
We examine the physical basis for algorithms to replace mixing-length theory
(MLT) in stellar evolutionary computations. Our 321D procedure is based on
numerical solutions of the Navier-Stokes equations. These implicit large eddy
simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent,
including the Kolmogorov cascade. We use the Reynolds-averaged Navier-Stokes
(RANS) formulation to condense the 3D simulation data, and use the 3D
simulations to provide closure for the RANS equations. We further analyze this
data set with a simple analytical model, which is non-local and time-dependent,
and which contains both MLT and the Lorenz convective roll as particular
subsets of solutions. A characteristic length (the damping length) again
emerges in the simulations; it is determined by an observed balance between (1)
the large-scale driving, and (2) small-scale damping.
The nature of mixing and convective boundaries is analyzed, including
dynamic, thermal and compositional effects, and compared to a simple model.
We find that
(1) braking regions (boundary layers in which mixing occurs) automatically
appear beyond the edges of convection as defined by the Schwarzschild
criterion,
(2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux
(unlike MLT),
(3) the effects of composition gradients on flow can be comparable to thermal
effects, and
(4) convective boundaries in neutrino-cooled stages differ in nature from
those in photon-cooled stages (different Péclet numbers).
The algorithms are based upon ILES solutions to the Navier-Stokes equations,
so that, unlike MLT, they do not require any calibration to astronomical
systems in order to predict stellar properties. Implications for solar
abundances, helioseismology, asteroseismology, nucleosynthesis yields,
supernova progenitors and core collapse are indicated.
Comment: 22 pages, 4 figures, 2 tables; significantly re-written, critique of Pasetto et al. model added, accepted for publication by Ap
A Simple Theory of Complex Valuation
Complex valuations of assets, companies, government programs, damages, and the like cannot be done without expertise, yet judges routinely pick an arbitrary value that falls somewhere between the extreme numbers suggested by competing experts. This creates costly uncertainty and undermines the legitimacy of the court. Proposals to remedy this well-recognized difficulty have become increasingly convoluted. As a result, no solution has been effectively adopted and the problem persists. This Article suggests that the valuation dilemma stems from a misconception of the inquiry involved. Courts have treated valuation as its own special type of inquiry, distinct from traditional fact-finding. We show that reintroducing fundamental principles of fact-finding can provide a simpler and more accurate method of complex valuation. Our conclusion rests on the premise that valuations are nothing more than exercises in routine fact-finding. Valuation is not an ethereal question with no right answer. Rather, valuation is a process of inferring the value that a relevant community places on an asset. This basic point has been ignored in practice and has received almost no attention in the academy. Recognizing this foundational point can both restore the legitimacy of the process and reduce the costs of uncertainty and biased testimony. We demonstrate that a return to traditional evidentiary rules, including attention to burdens of proof, will discourage courts from resorting to ad hoc calculations and will encourage courts to arrive at valuations through vetted methodologies that are shown to be reasonably accurate and, most importantly, supported by the record. We further show that this will lead to an improvement in the quality of information provided by expert witnesses.