130 research outputs found

    Efficient single photon collection for single atom quantum nodes

    Moving Forward in Human Cancer Risk Assessment

    The goal of human risk assessment is to decide whether a given exposure level to a particular chemical or substance is acceptable to human health, and to provide risk management measures based on an evaluation and prediction of the effects of that exposure on human health. Within this framework, the current safety paradigm for assessing possible carcinogenic properties of drugs, cosmetics, industrial chemicals and environmental exposures relies mainly on in vitro genotoxicity testing followed by 2-year bioassays in mice and rats. This testing paradigm was developed 40 to 50 years ago on the initial premise that "mutagens are also carcinogens" and that the carcinogenic risk to humans can be extrapolated from the tumor incidence after lifetime exposure to maximally tolerated doses of chemicals in rodents. Genotoxicity testing is used as a surrogate for carcinogenicity testing and is required for the initiation of clinical trials (Jacobs and Jacobson-Kram 2004) and for the safety assessment of most industrial chemicals. Although the carcinogenicity-testing paradigm has effectively protected patients and consumers from the introduction of harmful carcinogens as drugs and other products, it is clearly not sustainable in the future. The causal link between genetic damage and carcinogenicity is well documented; however, the limitations of genotoxicity/carcinogenicity testing assays, the existence of additional non-genotoxic mechanisms, issues of species-specific effects, and the lack of mechanistic insight pose an enormous scientific challenge. The 2-year rodent carcinogenicity bioassays are associated with technical complexity, high costs and a high animal burden, as well as the uncertainty of extrapolating from rodents to humans. Additional frustration arises from the limited predictability of the 2-year bioassay, in particular the problem of false positives. For instance, the Carcinogenic Potency Database (CPDB), which includes results from chronic, long-term animal cancer tests in mice, rats and hamsters amounting to a total of 6540 individual experiments with 1547 chemicals, reports positive findings in rodent studies for 751 of those chemicals, or 51%. Similarly, when one considers all chronically used human pharmaceuticals, some 50% induce tumors in rodents. Yet only 20 human pharmaceutical compounds have been identified as carcinogens in epidemiological studies, despite the large number of epidemiological studies carried out on such compounds (e.g. NSAIDs, benzodiazepines, phenobarbital). This high incidence of tumors in bioassays has led to questions concerning the human relevance of tumors induced in rodents (Knight et al. 2006; Ward 2008). In summary, reliance on the rodent model as the gold standard of cancer risk assessment neglects the high number of false positives and clearly has serious limitations. Consequently, there is a growing appeal for a paradigm change after "50 years of rats and mice". For instance, the volume of carcinogenicity testing currently demanded, together with the limitations on animal use initially stipulated by REACH (Combes et al. 2006), will require a revolutionary change in the testing paradigm.
For the purpose of developing a road map for this needed paradigm change in carcinogenicity testing, a workshop entitled "Genomics in Cancer Risk Assessment" was held in August 2009 in Venice, Italy. It brought together toxicologists from academia and industry with governmental regulators and risk assessors from the US and the EU to discuss the state of the art in developing alternative testing strategies for genotoxicity and carcinogenicity, focusing on the contribution of the 'omics technologies. What follows highlights the major conclusions and suggestions from this workshop as a path forward.

    Challenging local realism with human choices

    A Bell test is a randomized trial that compares experimental observations against the philosophical worldview of local realism [1], in which the properties of the physical world are independent of our observation of them and no signal travels faster than light. A Bell test requires spatially distributed entanglement, fast and high-efficiency detection and unpredictable measurement settings [2,3]. Although technology can satisfy the first two of these requirements [4-7], the use of physical devices to choose settings in a Bell test involves making assumptions about the physics that one aims to test. Bell himself noted this weakness in using physical setting choices and argued that human 'free will' could be used rigorously to ensure unpredictability in Bell tests [8]. Here we report a set of local-realism tests using human choices, which avoids assumptions about predictability in physics. We recruited about 100,000 human participants to play an online video game that incentivizes fast, sustained input of unpredictable selections and illustrates Bell-test methodology [9]. The participants generated 97,347,490 binary choices, which were directed via a scalable web platform to 12 laboratories on five continents, where 13 experiments tested local realism using photons [5,6], single atoms [7], atomic ensembles [10] and superconducting devices [11]. Over a 12-hour period on 30 November 2016, participants worldwide provided a sustained data flow of over 1,000 bits per second to the experiments, which used different human-generated data to choose each measurement setting. The observed correlations strongly contradict local realism and other realistic positions in bipartite and tripartite [12] scenarios. Project outcomes include closing the 'freedom-of-choice loophole' (the possibility that the setting choices are influenced by 'hidden variables' to correlate with the particle properties [13]), the utilization of video-game methods [14] for rapid collection of human-generated randomness, and the use of networking techniques for global participation in experimental science.
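
    As a point of reference for the Bell-test methodology described above, the sketch below shows how the standard bipartite CHSH statistic is typically estimated from recorded setting choices and measurement outcomes; local realism bounds |S| <= 2, while quantum mechanics allows values up to 2*sqrt(2). This is a minimal illustration only: the function name, sign convention and toy data are assumptions for clarity, not taken from the paper or its analysis.

```python
# Minimal sketch of the standard CHSH estimator used in bipartite Bell tests.
# The setting labels x, y and the sign convention are illustrative assumptions;
# in the experiment reported above, the setting bits came from human choices.
import numpy as np

def chsh_statistic(x, y, a, b):
    """Estimate S = E(0,0) + E(0,1) + E(1,0) - E(1,1).

    x, y : arrays of binary setting choices (0 or 1) for the two sides
    a, b : arrays of measurement outcomes (+1 or -1)
    Local realism implies |S| <= 2; quantum mechanics allows up to 2*sqrt(2).
    """
    x, y, a, b = map(np.asarray, (x, y, a, b))
    S = 0.0
    for sx in (0, 1):
        for sy in (0, 1):
            mask = (x == sx) & (y == sy)
            E = np.mean(a[mask] * b[mask])  # correlator for this setting pair
            S += -E if (sx == 1 and sy == 1) else E
    return S

# Toy usage with uncorrelated random data: S should come out near 0.
rng = np.random.default_rng(0)
n = 10_000
x, y = rng.integers(0, 2, n), rng.integers(0, 2, n)
a, b = rng.choice([-1, 1], n), rng.choice([-1, 1], n)
print(chsh_statistic(x, y, a, b))
```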

    Validar a guerra: a construção do regime de Expertise estratégica (Validating war: the construction of the strategic expertise regime)

    This article is intended to contribute to the interpretative analysis of war. For that purpose, it investigates how certain apparatuses located in strategic thinking help to make modern war a social practice considered both technically feasible and, at the same time, legitimate for soldiers. In so doing, it makes use of two different but closely related theoretical fields: pragmatic sociology (drawing inspiration from the work of scholars such as Luc Boltanski, Nicolas Dodier and Francis Chateauraynaud) and the sociology of scientific knowledge (based mostly on the work of Bruno Latour). On the one hand, the sociology of scientific knowledge has developed a productive questioning of the construction of scientific facts that is particularly relevant to the present research. On the other hand, pragmatic sociology provides a compatible framework for describing collective action. The combination of the two approaches allows the description of the formation of a strategic expertise regime that supports the technical legitimacy of the use of military force. Together, the sociology of scientific knowledge and pragmatic sociology bring a particularly relevant perspective to research pertaining to war.