This paper is largely concerned with the design, rather than the analysis, of experiments in quantum physics. There is still a gap between what is theoretically possible under the laws of quantum mechanics and what is practically possible in the laboratory, though this gap is closing fast.

`Information' is understood throughout in the sense it has in mathematical statistics. We do not discuss quantum information theory in the sense of optimal coding and transmission of messages through quantum communication channels, nor in the more general sense of quantum information processing (Green, 2000).

Within quantum statistics, we concentrate on the topics of estimation and inference. The classic books of Helstrom (1976) and Holevo (1982) are, on the other hand, largely devoted to a decision-theoretic approach to hypothesis testing problems; see Parthasarathy (1999) and Ogawa and Nagaoka (2000) for recent contributions to this field. Confusingly, the phrase `maximum likelihood estimator' has an unorthodox meaning in the older literature. In many papers, of which we mention just a few recent ones, Belavkin (1994, 2000, 2001) develops a continuous-time Bayesian filtering approach to estimation and control.

It should be emphasised from the start that we see quantum mechanics as describing classical probability models for the outcomes of laboratory experiments, or indeed for the real-world outcomes of any interactions between `the quantum world' of microscopic particles and `the real world' in which statisticians analyse data. Those probability models may depend on unknown parameters, and quantum statistics is concerned with statistical design and inference concerning those parameters. This point of view is commonplace in experimental quantum physics but seems to be less common...
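The claim that quantum mechanics describes classical probability models for measurement outcomes can be made concrete with the standard Born rule. The following is a minimal sketch in notation of our own choosing (the state $\rho(\theta)$ and measurement $\{M(x)\}$ are illustrative, not defined in the text above):

```latex
% A quantum state is a density matrix \rho(\theta), depending on an
% unknown parameter \theta; a measurement with outcomes x is described
% by a POVM \{M(x)\}_x, with M(x) \succeq 0 and \sum_x M(x) = I.
% The induced classical probability model for the observed outcome is
p(x;\theta) \;=\; \operatorname{Tr}\!\bigl(\rho(\theta)\,M(x)\bigr),
% an ordinary parametric family, to which likelihood-based statistical
% inference about \theta applies in the usual way.
```

On this reading, the design question is the choice of measurement $\{M(x)\}$, while the analysis question is inference about $\theta$ from the resulting classical data.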