
    The value of uncertainty: evaluating the precision of physical measurements and the limits of experimental knowledge

    Abstract: A measurement result is never absolutely accurate: it is affected by an unknown “measurement error” which characterizes the discrepancy between the obtained value and the “true value” of the quantity intended to be measured. As a consequence, to be acceptable a measurement result cannot take the form of a unique numerical value, but has to be accompanied by an indication of its “measurement uncertainty”, which enunciates a state of doubt. What, though, is the value of measurement uncertainty? What is its numerical value: how does one calculate it? What is its epistemic value: how should one interpret a measurement result? Firstly, we describe the statistical models that scientists make use of in contemporary metrology to perform an uncertainty analysis, and we show that the issue of the interpretation of probabilities is vigorously debated. This debate brings out epistemological issues about the nature and function of physical measurements, metrologists insisting in particular on the subjective aspect of measurement. Secondly, we examine the philosophical work metrologists carry out in their technical writings, where they criticize the use of the notion of “true value” of a physical quantity. We then challenge this criticism and defend such a notion. The third part turns to a specific use of measurement uncertainty in order to address our theme from the perspective of precision physics, considering the activity of the adjustments of physical constants. In the course of this activity, physicists have developed a dynamic conception of the accuracy of their measurement results, oriented towards a future progress of knowledge, and underlining the epistemic virtues of a never-ending process of identification and correction of measurement errors.
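    The Type A evaluation at the core of such an uncertainty analysis can be sketched in a few lines, following the usual GUM recipe of standard deviation of the mean plus a coverage factor (the readings below are hypothetical, not from the thesis):

```python
# Minimal sketch of a GUM-style Type A uncertainty evaluation:
# repeated readings of one quantity, with the result quoted as
# mean ± expanded uncertainty.  Data are made up for illustration.
import math

readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78, 9.84, 9.81]  # hypothetical

n = len(readings)
mean = sum(readings) / n
# Experimental standard deviation of the readings (Bessel-corrected).
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
# Standard uncertainty of the mean: u = s / sqrt(n).
u = s / math.sqrt(n)
# Expanded uncertainty with coverage factor k = 2 (~95 % for normal errors).
U = 2 * u

print(f"result: {mean:.3f} ± {U:.3f} (k = 2)")
```

    The point of the final line is exactly the one the abstract makes: the result is not a bare number but a value accompanied by a quantified statement of doubt.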

    Measuring is more than assigning numbers

    'Measurement is fundamental to research-related activities in social science (hence this Handbook). In my own field of education research, perhaps the most discussed element of education lies in test scores. Examination results are measurements, the number of students attaining a particular standard in a test is a measurement; indeed the standard of a test is a measurement. The allocation of places at school, college or university, student:teacher ratios, funding plans, school timetables, staff workloads, adult participation rates, and the stratification of educational outcomes by sex, social class, ethnicity or geography for example, are all based on measurements. Good and careful work has been done in all of these areas (Nuttall 1987). However, the concept of measurement itself remains under-examined, and is often treated in an uncritical way. In saying this I mean more than the usual lament about the qualitative:quantitative schism or the supposed reluctance of social scientists to engage with numeric analysis (Gorard et al. 2004a). I mean that even where numeric analysis is being conducted, the emphasis is on collecting, collating, analysing, and reporting the kinds of data generated by measurement, with the process of measurement and the rigour of the measurement instrument being somewhat taken for granted by many commentators. Issues that are traditionally considered by social scientists include levels of measurement, reliability, validity, and the creation of complex indices (as illustrated in some of the chapters contained in this volume). But these matters are too often dealt with primarily as technical matters, such as how to assess reliability or which statistical test to use with which combination of levels of measurement. The process of quantification itself is just assumed.'
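    The "technical matters" the passage lists, reliability among them, can be made concrete; a minimal sketch of one standard internal-consistency check, Cronbach's alpha, with made-up item scores (the data and function are illustrative, not from the Handbook):

```python
# Cronbach's alpha: a common internal-consistency reliability coefficient.
# Input is one score list per test item; respondents index the inner lists.

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs):              # population variance, as in the usual formula
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Three items, five respondents (made-up numbers).
scores = [
    [2, 4, 3, 5, 4],
    [3, 4, 2, 5, 5],
    [2, 5, 3, 4, 4],
]
alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")
```

    The author's point survives the arithmetic: computing such a coefficient is easy; deciding whether the quantification behind the item scores was warranted in the first place is the under-examined part.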

    Quantum mechanics: Myths and facts

    A common understanding of quantum mechanics (QM) among students and practical users is often plagued by a number of "myths", that is, widely accepted claims on which there is not really a general consensus among experts in foundations of QM. These myths include wave-particle duality, the time-energy uncertainty relation, fundamental randomness, the absence of measurement-independent reality, locality of QM, nonlocality of QM, the existence of a well-defined relativistic QM, the claims that quantum field theory (QFT) solves the problems of relativistic QM or that QFT is a theory of particles, as well as myths on black-hole entropy. The fact is that the existence of various theoretical and interpretational ambiguities underlying these myths does not yet allow us to accept them as proven facts. I review the main arguments and counterarguments lying behind these myths and conclude that QM is still a not-yet-completely-understood theory open to further fundamental research. Comment: 51 pages, pedagogic review, revised, new references, to appear in Found. Phys.
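    In contrast with the contested time-energy relation the review discusses, the position-momentum relation Δx·Δp ≥ ħ/2 is uncontroversial and easy to check numerically; a grid-based sketch for a Gaussian wavepacket, which saturates the bound (units with ħ = 1; the setup is illustrative, not taken from the paper):

```python
# Numerical check that a Gaussian wavepacket saturates Δx·Δp = ħ/2.
# Position-space moments come from a grid; momentum-space moments from
# the FFT of the wavefunction.  Units with ħ = 1.
import numpy as np

hbar = 1.0
sigma = 0.7                                   # arbitrary wavepacket width
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize the wavefunction

prob_x = np.abs(psi)**2
delta_x = np.sqrt(np.sum(x**2 * prob_x) * dx)          # <x> = 0 by symmetry

p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)    # momentum grid
dp = 2 * np.pi * hbar / (x.size * dx)                  # momentum spacing
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar) # momentum amplitude
prob_p = np.abs(phi)**2
delta_p = np.sqrt(np.sum(p**2 * prob_p) * dp)          # <p> = 0 by symmetry

print(delta_x * delta_p, "vs hbar/2 =", hbar / 2)
```

    The same numerical machinery does not extend to a time-energy relation, since there is no time operator: that asymmetry is part of why the review classes the time-energy relation among the myths.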

    Certainty in Heisenberg's uncertainty principle: Revisiting definitions for estimation errors and disturbance

    We revisit the definitions of error and disturbance recently used in error-disturbance inequalities derived by Ozawa and others by expressing them in the reduced system space. The interpretation of the definitions as mean-squared deviations relies on an implicit assumption that is generally incompatible with the Bell-Kochen-Specker-Spekkens contextuality theorems, and which results in averaging the deviations over a non-positive-semidefinite joint quasiprobability distribution. For unbiased measurements, the error admits a concrete interpretation as the dispersion in the estimation of the mean induced by the measurement ambiguity. We demonstrate how to directly measure not only this dispersion but also every observable moment with the same experimental data, and thus demonstrate that perfect distributional estimations can have nonzero error according to this measure. We conclude that the inequalities using these definitions do not capture the spirit of Heisenberg's eponymous inequality, but do indicate a qualitatively different relationship between dispersion and disturbance that is appropriate for ensembles being probed by all outcomes of an apparatus. To reconnect with the discussion of Heisenberg, we suggest alternative definitions of error and disturbance that are intrinsic to a single apparatus outcome. These definitions naturally involve the retrodictive and interdictive states for that outcome, and produce complementarity and error-disturbance inequalities that have the same form as the traditional Heisenberg relation. Comment: 15 pages, 8 figures, published version.
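    The "dispersion in the estimation of the mean" can be illustrated with a toy simulation that is not the paper's formalism: estimating ⟨Z⟩ of a qubit state from N projective Z measurements, and comparing the empirical spread of the estimator across repetitions with the analytic value (1 − ⟨Z⟩²)/N (all names and numbers below are illustrative assumptions):

```python
# Toy illustration of estimator dispersion: projective Z measurements of
# |psi> = cos(theta/2)|0> + sin(theta/2)|1> yield +1/-1 outcomes with Born
# probabilities; the sample mean of N shots estimates <Z>, with variance
# (1 - <Z>^2)/N.
import math, random

random.seed(0)
theta = math.pi / 3
mean_z = math.cos(theta)              # exact <Z> = 1/2
p_plus = (1 + mean_z) / 2             # Born probability of outcome +1

def estimate(n_shots):
    outs = [1 if random.random() < p_plus else -1 for _ in range(n_shots)]
    return sum(outs) / n_shots

# Empirical dispersion of the estimator over many repetitions...
reps = [estimate(100) for _ in range(2000)]
m = sum(reps) / len(reps)
emp_var = sum((r - m) ** 2 for r in reps) / len(reps)
# ...versus the analytic dispersion (1 - <Z>^2)/N.
analytic_var = (1 - mean_z**2) / 100
print(emp_var, analytic_var)
```

    In this sharp-measurement toy case the dispersion is purely statistical; the paper's point concerns the extra dispersion an ambiguous (noisy) apparatus adds on top of it.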

    The role of matter density uncertainties in the analysis of future neutrino factory experiments

    Matter density uncertainties can affect the measurements of the neutrino oscillation parameters at future neutrino factory experiments, such as the measurements of the mixing parameters $\theta_{13}$ and $\delta_{\mathrm{CP}}$. We compare different matter density uncertainty models and discuss the possibility of including the matter density uncertainties in a complete statistical analysis. Furthermore, we systematically study in which measurements, and where in the parameter space, matter density uncertainties are most relevant. We illustrate this discussion with examples that show the effects as functions of different magnitudes of the matter density uncertainties. We find that matter density uncertainties are especially relevant for large $\sin^2 2\theta_{13} \gtrsim 10^{-3}$. Within the KamLAND-allowed range, they are most relevant for the precision measurements of $\sin^2 2\theta_{13}$ and $\delta_{\mathrm{CP}}$, but less relevant for "binary" measurements, such as the sign of $\Delta m_{31}^2$, the sensitivity to $\sin^2 2\theta_{13}$, or the sensitivity to maximal CP violation. In addition, we demonstrate that knowing the matter density along a specific baseline to better than about 1% precision makes all measurements almost independent of the matter density uncertainties. Comment: 21 pages, 7 figures, LaTeX. Final version to be published in Phys. Rev.
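    How a percent-level density error propagates into the appearance probability can be seen already in the standard two-flavour, constant-density oscillation formulae (a textbook simplification, not the paper's full three-flavour statistical analysis); all parameter values below are illustrative:

```python
# Two-flavour nu_e -> nu_mu appearance probability in constant matter.
# The matter potential term A = 2*sqrt(2)*G_F*n_e*E is expressed via the
# standard numerical form A ~ 7.63e-5 eV^2 * rho[g/cm^3] * Y_e * E[GeV].
import math

def p_matter(E_gev, L_km, rho, dm2=2.5e-3, s2_2th=1e-3, Ye=0.5):
    """P(nu_e -> nu_mu) for density rho [g/cm^3], baseline L [km]."""
    A = 7.63e-5 * rho * Ye * E_gev           # matter term [eV^2]
    c2 = math.sqrt(1 - s2_2th)               # cos(2 theta)
    denom = s2_2th + (c2 - A / dm2) ** 2
    s2_2th_m = s2_2th / denom                # effective mixing in matter
    dm2_m = dm2 * math.sqrt(denom)           # effective splitting [eV^2]
    return s2_2th_m * math.sin(1.27 * dm2_m * L_km / E_gev) ** 2

E, L, rho0 = 6.0, 3000.0, 3.5                # illustrative values
for shift in (-0.05, 0.0, +0.05):            # +/- 5 % density error
    P = p_matter(E, L, rho0 * (1 + shift))
    print(f"rho x {1 + shift:.2f}: P = {P:.4e}")
```

    With these illustrative numbers a ±5% density shift moves the probability by a few percent, which is the kind of systematic the paper's statistical treatment is designed to absorb; near the vacuum-mimicking small-phase limit the effect largely cancels, so the size of the shift depends strongly on where in energy and baseline one sits.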