86 research outputs found

    Challenging local realism with human choices

    A Bell test is a randomized trial that compares experimental observations against the philosophical worldview of local realism [1], in which the properties of the physical world are independent of our observation of them and no signal travels faster than light. A Bell test requires spatially distributed entanglement, fast and high-efficiency detection and unpredictable measurement settings [2,3]. Although technology can satisfy the first two of these requirements [4-7], the use of physical devices to choose settings in a Bell test involves making assumptions about the physics that one aims to test. Bell himself noted this weakness in using physical setting choices and argued that human 'free will' could be used rigorously to ensure unpredictability in Bell tests [8]. Here we report a set of local-realism tests using human choices, which avoids assumptions about predictability in physics. We recruited about 100,000 human participants to play an online video game that incentivizes fast, sustained input of unpredictable selections and illustrates Bell-test methodology [9]. The participants generated 97,347,490 binary choices, which were directed via a scalable web platform to 12 laboratories on five continents, where 13 experiments tested local realism using photons [5,6], single atoms [7], atomic ensembles [10] and superconducting devices [11]. Over a 12-hour period on 30 November 2016, participants worldwide provided a sustained data flow of over 1,000 bits per second to the experiments, which used different human-generated data to choose each measurement setting. The observed correlations strongly contradict local realism and other realistic positions in bipartite and tripartite [12] scenarios. Project outcomes include closing the 'freedom-of-choice loophole' (the possibility that the setting choices are influenced by 'hidden variables' to correlate with the particle properties [13]), the utilization of video-game methods [14] for rapid collection of human-generated randomness, and the use of networking techniques for global participation in experimental science.
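    One standard way to quantify such correlations in a bipartite test is the Bell-CHSH statistic. The sketch below is purely illustrative and is not the project's analysis pipeline; the function and variable names are hypothetical. It shows how a CHSH value S can be estimated from binary setting choices (such as the human-generated bits) and ±1 measurement outcomes: any local-realistic model obeys |S| ≤ 2, while quantum mechanics allows values up to 2√2.

```python
# Minimal illustrative sketch (not the project's analysis code): estimating the
# CHSH statistic S from binary setting choices and +/-1 measurement outcomes.
# Local realism bounds |S| <= 2; quantum mechanics allows up to 2*sqrt(2).
import numpy as np

def compute_chsh(settings_a, settings_b, outcomes_a, outcomes_b):
    """Estimate S = E(0,0) + E(0,1) + E(1,0) - E(1,1) from trial data.

    settings_a, settings_b : arrays of 0/1 setting choices (e.g. human input)
    outcomes_a, outcomes_b : arrays of +1/-1 measurement outcomes
    """
    settings_a = np.asarray(settings_a)
    settings_b = np.asarray(settings_b)
    outcomes_a = np.asarray(outcomes_a)
    outcomes_b = np.asarray(outcomes_b)

    def correlation(x, y):
        # Average product of outcomes over trials measured with settings (x, y).
        mask = (settings_a == x) & (settings_b == y)
        return np.mean(outcomes_a[mask] * outcomes_b[mask])

    return (correlation(0, 0) + correlation(0, 1)
            + correlation(1, 0) - correlation(1, 1))

# Toy usage with random, uncorrelated data, which gives |S| close to 0:
rng = np.random.default_rng(0)
n = 100_000
s_a, s_b = rng.integers(0, 2, n), rng.integers(0, 2, n)
o_a, o_b = rng.choice([-1, 1], n), rng.choice([-1, 1], n)
print(compute_chsh(s_a, s_b, o_a, o_b))  # ~0; local realism requires |S| <= 2
```

    In the experiments themselves the settings were drawn from the live human-generated bit stream rather than from a pseudorandom generator, which is precisely what removes the assumption that a physical device's output is unpredictable.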

    Validar a guerra: a construção do regime de expertise estratégica [Validating War: The Construction of the Strategic Expertise Regime]

    This article is intended to contribute to the interpretative analysis of war. To that end, it investigates how certain apparatuses located in strategic thinking help to make modern war a social practice considered both technically feasible and legitimate for soldiers. In doing so, it draws on two different but closely related theoretical fields: pragmatic sociology (finding inspiration in the work of scholars such as Luc Boltanski, Nicolas Dodier and Francis Chateauraynaud) and the sociology of scientific knowledge (based mostly on the work of Bruno Latour). On the one hand, the sociology of scientific knowledge has developed a productive questioning of the construction of scientific facts that is particularly relevant to the present research. On the other hand, pragmatic sociology provides a compatible framework able to describe collective actions. The combination of both approaches allows the description of the formation of a strategic-expertise regime that supports the technical legitimacy of the use of military force. Together, the sociology of scientific knowledge and pragmatic sociology bring a particularly relevant perspective to research pertaining to war.

    Hemodynamic Response to Gabapentin in Conscious Spontaneously Hypertensive Rats


    Considering new methodologies in strategies for safety assessment of foods and food ingredients

    Toxicology and safety assessment are changing and require new strategies for evaluating risk that are less dependent on apical toxicity endpoints in animal models and rely more on knowledge of the mechanism of toxicity. This manuscript describes a number of developments that could contribute to this change and implements them in a stepwise roadmap that can be applied to the evaluation of food and food ingredients. The roadmap was evaluated in four case studies using literature and existing data. This preliminary evaluation showed the roadmap to be useful, but the experience should be extended with examples in which experimental work is required. To further implement these new insights in toxicology and safety assessment for food and food ingredients, the recommendation is that stakeholders take action to address gaps in our knowledge, for example with regard to the applicability of the roadmap to mixtures and food matrices. Further development of the threshold of toxicological concern is needed, as well as cooperation with other sectors where similar schemes are under development. Moreover, a more comprehensive evaluation of the roadmap, including identification of the need for in vitro experimental work, is recommended.
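    The threshold of toxicological concern (TTC) mentioned above works, in existing schemes, by comparing an estimated exposure with a threshold that depends on the substance's structural (Cramer) class. The Python sketch below is a simplified illustration of that kind of tiered comparison, not the roadmap proposed in the paper; the threshold values are the commonly cited Cramer-class TTC values, and the function and dictionary names are hypothetical.

```python
# Simplified illustration of a TTC-style screening step; this is not the
# roadmap from the paper. Threshold values are the commonly cited Cramer-class
# TTC values (in ug per kg body weight per day), shown for illustration only.
# Note: substances with structural alerts for genotoxicity and certain other
# classes are excluded from this simple comparison in published TTC schemes.
TTC_THRESHOLDS_UG_PER_KG_BW_DAY = {
    "cramer_I": 30.0,   # low structural concern
    "cramer_II": 9.0,   # intermediate concern
    "cramer_III": 1.5,  # high structural concern
}

def ttc_screen(exposure_ug_per_kg_bw_day: float, cramer_class: str) -> str:
    """Compare an estimated exposure against the TTC value for its Cramer class.

    Returns a coarse screening outcome; the result only indicates whether
    further, substance-specific (possibly experimental) evaluation is needed.
    """
    threshold = TTC_THRESHOLDS_UG_PER_KG_BW_DAY[cramer_class]
    if exposure_ug_per_kg_bw_day <= threshold:
        return "exposure below TTC: low priority for further testing"
    return "exposure above TTC: substance-specific evaluation needed"

# Example: an estimated exposure of 5 ug/kg bw/day for a Cramer class III substance
print(ttc_screen(5.0, "cramer_III"))  # above the 1.5 ug/kg bw/day threshold
```

    In practice a screen like this only flags whether substance-specific data or experimental work are warranted; it does not replace that evaluation, which is the kind of decision point a stepwise roadmap has to accommodate.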

    Normalization of pressure-natriuresis by nisoldipine in spontaneously hypertensive rats.
