Model-free inequality for data of Einstein-Podolsky-Rosen-Bohm experiments
We present a new inequality constraining correlations obtained when
performing Einstein-Podolsky-Rosen-Bohm experiments. The proof does not rely on
mathematical models that are imagined to have produced the data and is
therefore ``model-free''. The new inequality contains the model-free version of
the well-known Bell-CHSH inequality as a special case. A violation of the
latter implies that not all the data pairs in four data sets can be reshuffled
to create quadruples. This conclusion provides a new perspective on the
implications of the violation of Bell-type inequalities by experimental data.
Comment: Extended version of Annals of Physics, Volume 453, 169314, 2023 (https://doi.org/10.1016/j.aop.2023.169314)
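To make the quantities in this abstract concrete, the four correlations and their Bell-CHSH combination can be computed directly from recorded pairs of ±1 outcomes. The following Python sketch is illustrative only, not the authors' analysis: the data sets are randomly generated, and the setting labels ("a1b1", etc.) are hypothetical names for the four experimental conditions.

```python
import random

random.seed(0)

def correlation(pairs):
    """Average product of the two +/-1 outcomes in each pair."""
    return sum(a * b for a, b in pairs) / len(pairs)

# Four hypothetical data sets, one per setting pair (a1,b1), (a1,b2), (a2,b1), (a2,b2).
# Outcomes are +/-1; here they are drawn at random purely for illustration.
datasets = {
    key: [(random.choice([-1, 1]), random.choice([-1, 1])) for _ in range(1000)]
    for key in ["a1b1", "a1b2", "a2b1", "a2b2"]
}

E = {key: correlation(pairs) for key, pairs in datasets.items()}

# Bell-CHSH combination of the four correlations; the model-free version of the
# inequality bounds this by 2 when the maximum fraction of quadruples equals one.
S = abs(E["a1b1"] - E["a1b2"]) + abs(E["a2b1"] + E["a2b2"])
print(S)  # independent random data typically stays well below the bound of 2
```

For independent random outcomes each correlation is close to zero, so the combination stays far below 2; experimental EPRB data violating the bound is what rules out reshuffling all pairs into quadruples.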
Einstein–Podolsky–Rosen–Bohm experiments: A discrete data driven approach
We take the point of view that building a one-way bridge from experimental data to mathematical models instead of the other way around avoids running into controversies resulting from attaching meaning to the symbols used in the latter. In particular, we show that adopting this view offers new perspectives for constructing mathematical models for, and interpreting the results of, Einstein–Podolsky–Rosen–Bohm experiments. We first prove new Bell-type inequalities constraining the values of the four correlations obtained by performing Einstein–Podolsky–Rosen–Bohm experiments under four different conditions. The proof is “model-free” in the sense that it does not refer to any mathematical model that one imagines to have produced the data. The constraints only depend on the number of quadruples obtained by reshuffling the data in the four data sets without changing the values of the correlations. These new inequalities reduce to model-free versions of the well-known Bell-type inequalities if the maximum fraction of quadruples is equal to one. Because the inequalities are model-free, a violation of the latter by experimental data implies that not all the data in the four data sets can be reshuffled to form quadruples. Furthermore, such a violation only implies that any mathematical model assumed to have produced this data does not apply. Starting from the data obtained by performing Einstein–Podolsky–Rosen–Bohm experiments, we construct, instead of postulate, mathematical models that describe the main features of these data. The mathematical framework of plausible reasoning is applied to reproducible and robust data, yielding, without using any concept of quantum theory, the expression of the correlation for a system of two spin-1/2 objects in the singlet state.
Next, we apply Bell's theorem to the Stern–Gerlach experiment and demonstrate how the requirement of separability leads to the quantum-theoretical description of the averages and correlations obtained from an Einstein–Podolsky–Rosen–Bohm experiment. We analyze the data of an Einstein–Podolsky–Rosen–Bohm experiment and debunk the popular statement that Einstein–Podolsky–Rosen–Bohm experiments have vindicated quantum theory. We argue that it is not quantum theory but the processing of data from EPRB experiments that should be questioned. We perform Einstein–Podolsky–Rosen–Bohm experiments on a superconducting quantum information processor to show that the event-by-event generation of discrete data can yield results that are in good agreement with the quantum-theoretical description of the Einstein–Podolsky–Rosen–Bohm thought experiment. We demonstrate that a stochastic and a subquantum model can also produce data that are in excellent agreement with the quantum-theoretical description of the Einstein–Podolsky–Rosen–Bohm thought experiment.
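The quantum-theoretical expression referred to above, the correlation for two spin-1/2 objects in the singlet state, can be checked numerically. The sketch below is a standard textbook computation, not code from the work itself: it evaluates the expectation value of the product of spin measurements along two directions in the x–z plane and confirms that it equals minus the cosine of the angle between them.

```python
import numpy as np

# Pauli matrices for spin measurements
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    """Spin observable along a direction in the x-z plane at angle theta."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state |psi> = (|01> - |10>) / sqrt(2) in the basis |00>,|01>,|10>,|11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(theta_a, theta_b):
    """Quantum-theoretical correlation <psi| (a.sigma) x (b.sigma) |psi>."""
    op = np.kron(spin_op(theta_a), spin_op(theta_b))
    return np.real(psi.conj() @ op @ psi)

# The singlet correlation equals -cos(theta_a - theta_b)
for ta, tb in [(0.0, 0.0), (0.0, np.pi / 3), (np.pi / 4, 0.0)]:
    assert abs(E(ta, tb) + np.cos(ta - tb)) < 1e-12

print(E(0.0, np.pi / 3))  # approximately -0.5, i.e. -cos(pi/3)
```

The point of the abstract is that this same expression can be reached from reproducible, robust discrete data without invoking quantum theory; the code merely verifies the target expression on the quantum-theoretical side.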
Quantum annealing and its variants: Application to quadratic unconstrained binary optimization
In this thesis, we study the performance of the numerical implementation of quantum annealing, as well as of physical quantum annealing systems from D-Wave Quantum Systems Inc., for solving 2-satisfiability (2-SAT) and other quadratic unconstrained binary optimization (QUBO) problems. To gauge the suitability of quantum annealing for solving these problems, we use three main metrics: the probability of the algorithm to solve the problem, its ability to find all solutions if the problem has more than one, and the scaling of the time to solution as a function of the problem size. In doing so, we compare the performance of numerically simulated ideal quantum annealing with its actual physical realization. We find that the ideal, standard quantum annealing algorithm can solve the sets of 2-SAT problems considered in this work, albeit with a low success probability for hard problems, and can sample the degenerate ground states of 2-SAT problems with multiple satisfying assignments in accordance with perturbation theory. However, in the long annealing time limit, the ideal standard annealing algorithm leads to a scaling of the time to solution that is worse than even simple enumeration of all possible states. On the other hand, we find that noise and temperature effects play an active role in the evolution of the state of the system on the D-Wave quantum annealers. These systems can solve a majority of the studied problems with a relatively large success probability, and the scaling of the time to solution, though still growing exponentially with the system size, is significantly improved. Next, by means of simulations, we introduce two modifications to the standard quantum annealing algorithm and gauge the performance of the modified algorithms. These modifications are the addition of a trigger Hamiltonian to the standard quantum annealing Hamiltonian, or a change of the initial Hamiltonian.
We choose the trigger Hamiltonian to have either ferromagnetic or antiferromagnetic transverse couplings, while the additional higher-order couplings added to the typically chosen initial Hamiltonian are ferromagnetic. We find that these modifications can lead to significant improvements in the performance of the annealing algorithm, even though the scaling behavior remains exponential.
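To make the problem class concrete: a 2-SAT instance can be cast as a QUBO by giving each clause a penalty term that equals 1 exactly when the clause is violated, so that satisfying assignments are the zero-energy ground states. The Python sketch below uses a small hypothetical instance and the brute-force enumeration baseline the thesis compares against; it is illustrative only, not the thesis's benchmark setup.

```python
from itertools import product

# A small, hypothetical 2-SAT instance. A literal is (variable index, negated?).
# Clauses: (x0 or x1) and (not x0 or x2) and (not x1 or not x2)
clauses = [((0, False), (1, False)), ((0, True), (2, False)), ((1, True), (2, True))]

def lit(x, i, neg):
    """Value of a literal (0 or 1) under assignment x."""
    return 1 - x[i] if neg else x[i]

def energy(x):
    """QUBO cost: the number of violated clauses. A clause (u or v) is violated
    exactly when both literals are 0, giving the quadratic penalty (1-u)*(1-v)."""
    return sum((1 - lit(x, i, n1)) * (1 - lit(x, j, n2)) for (i, n1), (j, n2) in clauses)

n = 3
# Simple enumeration of all 2^n assignments: the classical baseline against which
# the scaling of the annealing algorithm's time to solution is compared.
ground_energy = min(energy(x) for x in product([0, 1], repeat=n))
solutions = [x for x in product([0, 1], repeat=n) if energy(x) == ground_energy]
print(ground_energy, solutions)  # 0 and the two satisfying assignments
```

An instance with several zero-energy assignments, like this one, is exactly the degenerate-ground-state situation for which the thesis studies whether the annealer samples all solutions.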