54,229 research outputs found
Criteria for reliable entanglement quantification with finite data
We propose one and a half criteria for determining how many measurements are
needed to quantify entanglement reliably. We base these criteria on Bayesian
analysis of measurement results, and apply our methods to four-qubit
entanglement, but generalizations to more qubits are straightforward.
Acylsulfonamide safety-catch linker: promise and limitations for solid-phase oligosaccharide synthesis
Safety-catch linkers are useful for solid-phase oligosaccharide synthesis as they are orthogonal to many common protective groups. A new acylsulfonamide safety-catch linker was designed, synthesized and employed during glycosylations using an automated carbohydrate synthesizer. The analysis of the cleavage products revealed shortcomings for oligosaccharide synthesis.
Entanglement verification with finite data
Suppose an experimentalist wishes to verify that his apparatus produces
entangled quantum states. A finite amount of data cannot conclusively
demonstrate entanglement, so drawing conclusions from real-world data requires
statistical reasoning. We propose a reliable method to quantify the weight of
evidence for (or against) entanglement, based on a likelihood ratio test. Our
method is universal in that it can be applied to any sort of measurements. We
demonstrate the method by applying it to two simulated experiments on two
qubits. The first measures a single entanglement witness, while the second
performs a tomographically complete measurement.
Comment: 4 pages, 3 pretty pictures
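The likelihood-ratio idea for the single-witness case can be illustrated with a toy calculation; this is a minimal sketch under our own assumptions, not the authors' procedure. Assume a binary witness measurement with outcomes ±1 in which separable states satisfy ⟨W⟩ ≥ 0, i.e. the probability p₊ of the +1 outcome is at least 1/2. The weight of evidence against separability is then the log-likelihood ratio between the unconstrained maximum-likelihood estimate and the best estimate restricted to the separable region:

```python
import math

def log_likelihood(n_plus, n_minus, p_plus):
    """Binomial log likelihood of n_plus '+1' and n_minus '-1' outcomes."""
    return n_plus * math.log(p_plus) + n_minus * math.log(1.0 - p_plus)

def witness_llr(n_plus, n_minus):
    """Toy log-likelihood ratio against the separable hypothesis.

    Assumes a hypothetical two-outcome witness where separable states
    satisfy p_plus >= 0.5; zero means the data are fully consistent
    with separability, larger values mean stronger evidence against it.
    """
    n = n_plus + n_minus
    p_hat = n_plus / n            # unconstrained MLE
    p_sep = max(p_hat, 0.5)       # MLE restricted to the separable region
    return 2.0 * (log_likelihood(n_plus, n_minus, p_hat)
                  - log_likelihood(n_plus, n_minus, p_sep))
```

When the empirical mean of the witness is non-negative, the constrained and unconstrained optima coincide and the ratio is exactly zero, reflecting no evidence for entanglement.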
NMR Investigation of the Low Temperature Dynamics of solid 4He doped with 3He impurities
The lattice dynamics of solid 4He has been explored using pulsed NMR methods
to study the motion of 3He impurities in the temperature range where
experiments have revealed anomalies attributed to superflow or unexpected
viscoelastic properties of the solid 4He lattice. We report the results of
measurements of the nuclear spin-lattice and spin-spin relaxation times that
measure the fluctuation spectrum at high and low frequencies, respectively, of
the 3He motion that results from quantum tunneling in the 4He matrix. The
measurements were made for 3He concentrations 16 < x_3 < 2000 ppm. For 3He
concentrations x_3 = 16 ppm and 24 ppm, large changes are observed for both the
spin-lattice relaxation time T_1 and the spin-spin relaxation time T_2 at
temperatures close to those for which the anomalies are observed in
measurements of torsional oscillator responses and the shear modulus. These
changes in the NMR relaxation rates were not observed for higher 3He
concentrations.
Comment: 23 pages, 10 figures
Entanglement and purity of single- and two-photon states
Whereas single- and two-photon wave packets are usually treated as pure
states, in practice they will be mixed. We study how entanglement created with
mixed photon wave packets is degraded. We find in particular that the
entanglement of a delocalized single-photon state of the electro-magnetic field
is determined simply by its purity. We also discuss entanglement for two-photon
mixed states, as well as the influence of a vacuum component.
Comment: 11 pages, 10 figures, 1 debuting author
Isolated Galaxies versus Interacting Pairs with MaNGA
We present preliminary results of the spectral analysis of the radial
distributions of the star formation history in both a galaxy merger and an
isolated spiral galaxy observed with MaNGA. We find that the central part of
the isolated galaxy is composed of an older stellar population (2 Gyr) than
the outskirts (7 Gyr). The time-scale also increases gradually from 1 Gyr in
the inner part to 3 Gyr in the outer regions of the galaxy. In the case of
the merger, the stellar population in the central region is older than in
the tails, presenting a longer time-scale in comparison to the central part
of the isolated galaxy. Our results are in agreement with a scenario where
spiral galaxies are built from the inside out. In the case of the merger, we
find evidence that interactions enhance star formation in the central part
of the galaxy.
Comment: 7 pages, 2 figures. Proceedings of the EWASS-2015 special session
Sp3, accepted for publication in the Special Issue "3D View on Interacting
and Post-Interacting Galaxies from Clusters to Voids" of the open access
journal "Galaxies".
A universal primer for isolation of fragments of a gene encoding phytoene desaturase for use in virus-induced gene silencing (VIGS) studies
We have been using Virus-Induced Gene Silencing (VIGS) to test the function of genes that are candidates for involvement in floral senescence. Although VIGS is a powerful tool for assaying the effects of gene silencing in plants, relatively few taxa have been studied using this approach, and most that have are in the Solanaceae. We typically use silencing of phytoene desaturase (PDS) in preliminary tests of the feasibility of using VIGS. Silencing this gene, whose product is involved in carotene biosynthesis, results in a characteristic photobleaching phenotype in the leaves. We have found that efficient silencing requires the use of fragments that are more than 90% homologous to the target gene. To simplify testing the effectiveness of VIGS in a range of species, we designed a set of universal primers to a region of the PDS gene that is highly conserved among species, and that therefore allows an investigator to isolate a fragment of the homologous PDS gene from the species of interest. We report the sequences of these primers and the results of VIGS experiments in horticultural species from the Asteraceae, Leguminosae, Balsaminaceae and Solanaceae.
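The >90% homology threshold mentioned in the abstract is straightforward to screen for computationally. A minimal sketch, assuming an ungapped alignment between a candidate fragment and the target gene (the 90% figure comes from the text; the function names and the simplified, gap-free comparison are ours):

```python
def percent_identity(a, b):
    """Ungapped percent identity between two aligned, equal-length sequences."""
    if len(a) != len(b) or not a:
        raise ValueError("sequences must be aligned, equal-length, non-empty")
    matches = sum(x == y for x, y in zip(a.upper(), b.upper()))
    return 100.0 * matches / len(a)

def likely_to_silence(fragment, target, threshold=90.0):
    """Heuristic from the text: efficient VIGS needs >90% identity to the target."""
    return percent_identity(fragment, target) > threshold
```

Real primer-design workflows would use a proper alignment (e.g. over gapped sequences) rather than this position-by-position comparison, but the thresholding idea is the same.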
Sequence Classification Restricted Boltzmann Machines With Gated Units
For the classification of sequential data, dynamic Bayesian networks and recurrent neural networks (RNNs) are the preferred models. The former can explicitly model the temporal dependencies between the variables, while the latter have the capability of learning representations. The recurrent temporal restricted Boltzmann machine (RTRBM) is a model that combines these two features. However, learning and inference in RTRBMs can be difficult because of the exponential nature of their gradient computations when maximizing log likelihoods. In this article, we first address this intractability by optimizing a conditional rather than a joint probability distribution when performing sequence classification. This results in the "sequence classification restricted Boltzmann machine" (SCRBM). Second, we introduce gated SCRBMs (gSCRBMs), which use an information-processing gate, as an integration of SCRBMs with long short-term memory (LSTM) models. In the experiments reported in this article, we evaluate the proposed models on optical character recognition, chunking, and multiresident activity recognition in smart homes. The experimental results show that gSCRBMs achieve performance comparable to that of the state of the art in all three tasks. gSCRBMs require far fewer parameters in comparison with other recurrent networks with memory gates, in particular LSTMs and gated recurrent units (GRUs).
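Optimizing a conditional rather than a joint distribution is the same move made in discriminative (classification) RBMs, where the class posterior p(y|v) has a closed form built from hidden-unit free energies. A minimal NumPy sketch of that closed form, under our own shapes and names (this is not the paper's SCRBM implementation):

```python
import numpy as np

def p_y_given_v(v, W, U, b_h, b_y):
    """Closed-form class posterior of a classification RBM:

    p(y|v) ∝ exp( b_y[y] + sum_j softplus(b_h[j] + W[j]·v + U[j, y]) )

    v: (n_visible,) input, W: (n_hidden, n_visible) weights,
    U: (n_hidden, n_classes) class weights, b_h/b_y: hidden/class biases.
    """
    n_classes = b_y.shape[0]
    scores = np.array([
        # np.logaddexp(0, x) is a numerically stable softplus log(1 + e^x)
        b_y[y] + np.logaddexp(0.0, b_h + W @ v + U[:, y]).sum()
        for y in range(n_classes)
    ])
    scores -= scores.max()          # stabilize before exponentiating
    p = np.exp(scores)
    return p / p.sum()
```

Because the sum over hidden configurations factorizes into per-unit softplus terms, this posterior is exact and cheap to evaluate, which is what makes the conditional objective tractable where the joint one is not.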
Incentivizing High Quality Crowdwork
We study the causal effects of financial incentives on the quality of
crowdwork. We focus on performance-based payments (PBPs), bonus payments
awarded to workers for producing high quality work. We design and run
randomized behavioral experiments on the popular crowdsourcing platform Amazon
Mechanical Turk with the goal of understanding when, where, and why PBPs help,
identifying properties of the payment, payment structure, and the task itself
that make them most effective. We provide examples of tasks for which PBPs do
improve quality. For such tasks, the effectiveness of PBPs is not too sensitive
to the threshold for quality required to receive the bonus, while the magnitude
of the bonus must be large enough to make the reward salient. We also present
examples of tasks for which PBPs do not improve quality. Our results suggest
that for PBPs to improve quality, the task must be effort-responsive: the task
must allow workers to produce higher quality work by exerting more effort. We
also give a simple method to determine if a task is effort-responsive a priori.
Furthermore, our experiments suggest that all payments on Mechanical Turk are,
to some degree, implicitly performance-based in that workers believe their work
may be rejected if their performance is sufficiently poor. Finally, we propose
a new model of worker behavior that extends the standard principal-agent model
from economics to include a worker's subjective beliefs about his likelihood of
being paid, and show that the predictions of this model are in line with our
experimental findings. This model may be useful as a foundation for theoretical
studies of incentives in crowdsourcing markets.
Comment: This is a preprint of an article accepted for publication in WWW. © 2015 International World Wide Web Conference Committee.
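The extended principal-agent model described above can be illustrated with a toy best-response calculation. This is a sketch under our own assumptions (the functional forms, numbers, and names below are illustrative, not the paper's): the worker chooses the effort level that maximizes subjective expected payment minus effort cost, where the subjective probability of being paid grows with effort.

```python
def optimal_effort(payment, p_paid, cost, efforts):
    """Worker's best response: argmax over effort levels of
    (subjective probability of being paid) * payment - cost of effort."""
    return max(efforts, key=lambda e: p_paid(e) * payment - cost(e))

# Toy instantiation: belief of being paid rises with effort, cost is convex.
efforts = [i / 10 for i in range(11)]      # effort levels in [0, 1]
p_paid = lambda e: 0.5 + 0.5 * e           # subjective P(paid | effort)
cost = lambda e: e ** 2                    # quadratic effort cost
```

In this toy model a larger payment (or a bonus that raises the stakes of being judged low quality) shifts the worker's optimum toward higher effort, matching the qualitative finding that payments on the platform are implicitly performance-based.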