Delayed-choice gedanken experiments and their realizations
The wave-particle duality dates back to Einstein's explanation of the
photoelectric effect through quanta of light and de Broglie's hypothesis of
matter waves. Quantum mechanics uses an abstract description for the behavior
of physical systems such as photons, electrons, or atoms. Whether quantum
predictions for single systems in an interferometric experiment allow an
intuitive understanding in terms of the particle or wave picture depends on
the specific configuration used. In principle, this leaves open
the possibility that quantum systems always either behave definitely as a
particle or definitely as a wave in every experimental run by a priori adapting
to the specific experimental situation. This possibility is precisely what
delayed-choice experiments aim to exclude; in these experiments, the observer chooses to reveal
the particle or wave character -- or even a continuous transformation between
the two -- of a quantum system at a late stage of the experiment. We review the
history of delayed-choice gedanken experiments, which can be traced back to the
early days of quantum mechanics. Then we discuss their experimental
realizations, in particular Wheeler's delayed choice in interferometric setups
as well as delayed-choice quantum erasure and entanglement swapping. The latter
is particularly interesting, because it elevates the wave-particle duality of a
single quantum system to an entanglement-separability duality of multiple
systems
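
To make the interferometric setting concrete, here is a minimal numerical sketch (not taken from the review itself) of a Mach-Zehnder interferometer: with the output beam splitter inserted, the detection probabilities oscillate with the relative phase (wave behaviour); with it removed, each detector fires with probability 1/2 regardless of the phase (particle behaviour). The two-mode matrix model and the function names are illustrative assumptions.

```python
import numpy as np

def beam_splitter():
    # 50:50 beam splitter acting on the two spatial modes
    return np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def phase_shift(phi):
    # relative phase phi applied to one arm of the interferometer
    return np.array([[np.exp(1j * phi), 0], [0, 1]])

def detection_probs(phi, second_bs_inserted):
    """Probabilities at the two output detectors for a single photon."""
    state = np.array([1, 0], dtype=complex)        # photon enters mode 0
    state = beam_splitter() @ state                # first beam splitter
    state = phase_shift(phi) @ state               # phase in one arm
    if second_bs_inserted:                         # the "delayed choice"
        state = beam_splitter() @ state            # recombine the two paths
    return np.abs(state) ** 2

for phi in (0.0, np.pi / 2, np.pi):
    wave = detection_probs(phi, second_bs_inserted=True)       # interference
    particle = detection_probs(phi, second_bs_inserted=False)  # which-path, no interference
    print(f"phi={phi:.2f}  with BS2: {wave.round(3)}  without BS2: {particle.round(3)}")
```

The delayed choice corresponds to fixing the `second_bs_inserted` flag only after the photon has already passed the first beam splitter.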
A Simulation Perspective: Error Analysis in the Distributed Simulation of Continuous System
To construct a corresponding distributed system from a continuous system, the most convenient way is to partition the system into parts according to its topology and deploy the parts directly on separate nodes. However, a system error is introduced during this process because the computing pattern changes from sequential to parallel. In this paper, the mathematical expression of the introduced error is studied. A theorem is proposed to prove that a distributed system preserving the stability property of the continuous system can be found if the system error is kept small enough. Then, the components of the system error are analyzed one by one and the complete expression is deduced, in which the advancing step T of the distributed environment is one of the key associated factors. Finally, the general steps to determine the step T are given. The significance of this study lies in the fact that the maximum T can be calculated without exceeding the expected error threshold, and a larger T reduces the simulation cost effectively without causing too much performance degradation compared to the original continuous system
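
The central quantity here is the error introduced when a coupled continuous system is split across nodes that exchange state only once per advancing step T. The sketch below is an illustrative assumption, not the paper's derivation: it integrates a small two-dimensional linear system once as a tightly coupled sequential model and once as two "distributed" subsystems that see each other's state frozen for a whole step T, then compares the results for several values of T.

```python
import numpy as np

# Coupled linear system dx/dt = A x, split into two scalar subsystems.
A = np.array([[-1.0, 0.5],
              [0.3, -0.8]])
x0 = np.array([1.0, -1.0])
t_end, h = 10.0, 1e-3            # reference (sequential) integration step

def sequential(x0):
    """Tightly coupled Euler integration: the reference solution."""
    x = x0.copy()
    for _ in range(int(t_end / h)):
        x = x + h * (A @ x)
    return x

def distributed(x0, T):
    """Each node integrates its own state and sees the peer's state
    only at the start of every advancing step T (frozen in between)."""
    x = x0.copy()
    steps_per_T = int(T / h)
    for _ in range(int(t_end / T)):
        frozen = x.copy()                                    # state exchanged at step boundary
        for _ in range(steps_per_T):
            dx0 = A[0, 0] * x[0] + A[0, 1] * frozen[1]       # node 0 uses stale x1
            dx1 = A[1, 0] * frozen[0] + A[1, 1] * x[1]       # node 1 uses stale x0
            x = x + h * np.array([dx0, dx1])
    return x

ref = sequential(x0)
for T in (0.01, 0.1, 0.5, 1.0):
    err = np.linalg.norm(distributed(x0, T) - ref)
    print(f"T = {T:4.2f}  ->  system error {err:.2e}")       # error grows with T
```

Running it shows the error growing with T, which is the trade-off described above: a larger T lowers synchronization cost but increases the deviation from the original continuous system.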
Regularized binormal ROC method in disease classification using microarray data
BACKGROUND: An important application of microarrays is to discover genomic biomarkers, among tens of thousands of genes assayed, for disease diagnosis and prognosis. Thus it is of interest to develop efficient statistical methods that can simultaneously identify important biomarkers from such high-throughput genomic data and construct appropriate classification rules. It is also of interest to develop methods for evaluation of classification performance and ranking of identified biomarkers. RESULTS: The ROC (receiver operating characteristic) technique has been widely used in disease classification with low dimensional biomarkers. Compared with the empirical ROC approach, the binormal ROC is computationally more affordable and robust in small sample size cases. We propose using the binormal AUC (area under the ROC curve) as the objective function for two-sample classification, and the scaled threshold gradient directed regularization method for regularized estimation and biomarker selection. Tuning parameter selection is based on V-fold cross validation. We develop Monte Carlo based methods for evaluating the stability of individual biomarkers and overall prediction performance. Extensive simulation studies show that the proposed approach can generate parsimonious models with excellent classification and prediction performance, under most simulated scenarios including model mis-specification. Application of the method to two cancer studies shows that the identified genes are reasonably stable with satisfactory prediction performance and biologically sound implications. The overall classification performance is satisfactory, with small classification errors and large AUCs. CONCLUSION: In comparison to existing methods, the proposed approach is computationally more affordable without losing the optimality possessed by the standard ROC method
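
For concreteness, the binormal AUC used above as the objective function has the closed form AUC = Φ((μ1 − μ0)/√(σ0² + σ1²)) when the marker is assumed normal in each class. The sketch below illustrates only that formula (not the paper's scaled threshold gradient directed regularization or cross-validation procedure), estimating the binormal AUC from two small samples and comparing it with the empirical (Mann-Whitney) AUC.

```python
import numpy as np
from scipy.stats import norm

def binormal_auc(x_neg, x_pos):
    """Binormal AUC: Phi((mu1 - mu0) / sqrt(s0^2 + s1^2))."""
    mu0, mu1 = np.mean(x_neg), np.mean(x_pos)
    s0, s1 = np.std(x_neg, ddof=1), np.std(x_pos, ddof=1)
    return norm.cdf((mu1 - mu0) / np.sqrt(s0**2 + s1**2))

def empirical_auc(x_neg, x_pos):
    """Mann-Whitney estimate: P(marker in cases > marker in controls)."""
    diff = x_pos[:, None] - x_neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, size=40)      # small-sample setting
cases = rng.normal(1.2, 1.5, size=40)
print("binormal AUC :", round(binormal_auc(controls, cases), 3))
print("empirical AUC:", round(empirical_auc(controls, cases), 3))
```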
Poly[diaqua(μ3-1H-benzimidazole-5,6-dicarboxylato-κ4N3:O5,O6:O6′)magnesium(II)]
In the title complex, [Mg(C9H4N2O4)(H2O)2]n, the MgII atom is six-coordinated by one N and three O atoms from three different 1H-benzimidazole-5,6-dicarboxylate ligands and two O atoms from two water molecules, forming a slightly distorted octahedral geometry. The ligand links the MgII centres into a three-dimensional network. Extensive N—H⋯O and O—H⋯O hydrogen bonds exist between the ligands and water molecules, stabilizing the crystal structure
Experimental quantum teleportation over a high-loss free-space channel
We present a high-fidelity quantum teleportation experiment over a high-loss
free-space channel between two laboratories. We teleported six states of three
mutually unbiased bases and obtained an average state fidelity of 0.82(1), well
beyond the classical limit of 2/3. With the obtained data, we tomographically
reconstructed the process matrices of quantum teleportation. The free-space
channel attenuation of 31 dB corresponds to the estimated attenuation regime
for a down-link from a low-earth-orbit satellite to a ground station. We also
discussed various important technical issues for future experiments, including
the dark counts of single-photon detectors and the coincidence-window width. Our
experiment tested the limit of performing quantum teleportation with
state-of-the-art resources. It is an important step towards future
satellite-based quantum teleportation and paves the way for establishing a
worldwide quantum communication network
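
As a back-of-the-envelope check on the quoted figures (an illustration only; the quoted 0.82(1) is read here as 0.82 ± 0.01), the short sketch below converts the 31 dB channel attenuation into a transmission factor and compares the average fidelity with the classical limit of 2/3.

```python
import numpy as np

# Figures quoted in the abstract
avg_fidelity = 0.82              # average teleported-state fidelity
fidelity_unc = 0.01              # assumed reading of the "(1)" uncertainty
classical_limit = 2.0 / 3.0      # best fidelity achievable without entanglement
attenuation_db = 31.0            # free-space channel attenuation

transmission = 10 ** (-attenuation_db / 10)     # fraction of photons surviving the link
sigmas_above = (avg_fidelity - classical_limit) / fidelity_unc

print(f"channel transmission ~ {transmission:.1e} (about 1 photon in {1/transmission:,.0f})")
print(f"fidelity exceeds the classical limit by ~{sigmas_above:.0f} standard deviations")
```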
A Case Study on Air Combat Decision Using Approximated Dynamic Programming
As a continuous state space problem, air combat is difficult to solve with traditional dynamic programming (DP) over a discretized state space. This paper studies an approximated dynamic programming (ADP) approach to build a high-performance decision model for air combat in a 1-versus-1 scenario, in which the iterative process of policy improvement is replaced by mass sampling from historical trajectories and approximation of the utility function, eventually leading to efficient policy improvement. A continuous reward function is also constructed to better guide the plane to find its way to the "winner" state from any initial situation. According to our experiments, the plane is more aggressive when following the policy derived from the ADP approach rather than the baseline Min-Max policy: the "time to win" is greatly reduced, but the cumulative probability of being killed by the enemy is higher. The reasons for this are analyzed in the paper
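
As a hedged sketch of the general pattern described here (sampling transitions from stored trajectories and fitting a utility/value-function approximator instead of sweeping a discretized state space), the toy example below applies fitted value iteration to a one-dimensional continuous state. The state definition, feature map, dynamics, and reward are illustrative assumptions, not the paper's air-combat model or its Min-Max baseline.

```python
import numpy as np

rng = np.random.default_rng(1)
actions = np.array([-1.0, 0.0, 1.0])     # toy maneuver choices
gamma = 0.95

def step(s, a):
    """Toy dynamics: s is a 1-D 'advantage' state; reward grows toward the win region."""
    s_next = s + 0.1 * a + rng.normal(0.0, 0.02)
    reward = 1.0 if s_next > 1.0 else 0.1 * s_next   # continuous shaping + terminal bonus
    return s_next, reward

def features(s):
    return np.array([1.0, s, s**2])                  # simple polynomial features

w = np.zeros(3)                                      # linear value-function weights

# Fitted value iteration: sample transitions ("history trajectories"),
# form greedy Bellman targets with the current approximation, refit by least squares.
for _ in range(50):
    states = rng.uniform(-1.0, 1.0, size=500)
    X, targets = [], []
    for s in states:
        backups = []
        for a in actions:
            s_next, r = step(s, a)
            backups.append(r + gamma * features(s_next) @ w)
        X.append(features(s))
        targets.append(max(backups))                 # max over actions
    w, *_ = np.linalg.lstsq(np.array(X), np.array(targets), rcond=None)

def policy(s):
    """One-step greedy lookahead on the noise-free part of the dynamics."""
    return actions[int(np.argmax([features(s + 0.1 * a) @ w for a in actions]))]

print("learned weights:", np.round(w, 3))
print("action chosen at s = -0.5:", policy(-0.5))
```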