
    Quantum Locality?

    Robert Griffiths has recently addressed, within the framework of a 'consistent quantum theory' that he has developed, the issue of whether, as is often claimed, quantum mechanics entails a need for faster-than-light transfers of information over long distances. He argues that the putative proofs of this property that involve hidden variables include in their premises some essentially classical-physics-type assumptions that are fundamentally incompatible with the precepts of quantum physics. One cannot logically prove properties of a system by establishing, instead, properties of a system modified by adding properties alien to the original system. Hence Griffiths' rejection of hidden-variable-based proofs is logically warranted. Griffiths mentions the existence of a certain alternative proof that does not involve hidden variables and that uses only macroscopically described observable properties. He notes that he had examined in his book proofs of this general kind, and concluded that they provide no evidence for nonlocal influences. But he did not examine the particular proof that he cites. An examination of that particular proof by the method specified by his 'consistent quantum theory' shows that the cited proof is valid within that restrictive version of quantum theory. An added section responds to Griffiths' reply, which cites general possibilities of ambiguities that make what is to be proved ill-defined, and hence render the pertinent 'consistent framework' ill-defined. But the vagaries that he cites do not upset the proof in question, which, both by its physical formulation and by explicit identification, specifies the framework to be used. Griffiths confirms the validity of the proof insofar as that framework is used. The section also shows, in response to Griffiths' challenge, why a putative proof of locality that he has described is flawed. Comment: This version adds a response to Griffiths' reply to my original. It notes that Griffiths confirms the validity of my argument if one uses the framework that I use. Griffiths' objection that other frameworks exist is not germane, because I use the unique one that satisfies the explicitly stated conditions that the choices be macroscopic choices of experiments and outcomes in a specified order.

    Population Differences in Death Rates in HIV-Positive Patients with Tuberculosis.

    SETTING: Randomised controlled clinical trial of Mycobacterium vaccae vaccination as an adjunct to anti-tuberculosis treatment in human immunodeficiency virus (HIV) positive patients with smear-positive tuberculosis (TB) in Lusaka, Zambia, and Karonga, Malawi. OBJECTIVE: To explain the difference in mortality between the two trial sites and to identify risk factors for death among HIV-positive patients with TB. DESIGN: Information on demographic, clinical, laboratory and radiographic characteristics was collected. Patients in Lusaka (667) and in Karonga (84) were followed up for an average of 1.56 years. Cox proportional hazards analyses were used to assess differences in survival between the two sites and to determine risk factors associated with mortality during and after anti-tuberculosis treatment. RESULTS: The case fatality rate was 14.7% in Lusaka and 21.4% in Karonga. The hazard ratio for death comparing Karonga to Lusaka was 1.47 (95% confidence interval [CI] 0.9-2.4) during treatment and 1.76 (95% CI 1.0-3.0) after treatment. This difference could be almost entirely explained by age and more advanced HIV disease among patients in Karonga. CONCLUSION: It is important to understand the reasons for population differences in mortality among patients with TB and HIV and to maximise efforts to reduce mortality.
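
    To make the design concrete, here is a minimal sketch of a Cox proportional hazards fit of the kind described, using the Python lifelines library. The file name and the covariate columns (site, age, cd4, time, death) are hypothetical stand-ins, not from the study.

    ```python
    # Minimal sketch of a Cox proportional hazards comparison of two sites.
    # Column names and the data file are hypothetical placeholders.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("tb_hiv_cohort.csv")  # hypothetical cohort file

    # Encode site as an indicator (1 = Karonga, 0 = Lusaka) so its coefficient
    # exponentiates to the Karonga-vs-Lusaka hazard ratio.
    df["karonga"] = (df["site"] == "Karonga").astype(int)

    cph = CoxPHFitter()

    # Unadjusted model: hazard ratio for site alone.
    cph.fit(df[["time", "death", "karonga"]],
            duration_col="time", event_col="death")
    print(cph.hazard_ratios_)

    # Adjusted model: if age and markers of HIV disease stage explain the
    # site difference, the karonga hazard ratio should move toward 1.
    cph.fit(df[["time", "death", "karonga", "age", "cd4"]],
            duration_col="time", event_col="death")
    cph.print_summary()
    ```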

    Assessing Human Error Against a Benchmark of Perfection

    An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors. To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging even for the best players in the world. We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time. Comment: KDD 2016; 10 pages
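
    As a concrete illustration of the setup (a sketch of mine, not the paper's code), one simple way to compare the three feature families is to train a separate classifier on each and compare predictive power. The file and all column names (elo, clock_seconds, n_legal_moves, n_losing_moves, is_error) are hypothetical.

    ```python
    # Sketch: compare skill, time, and difficulty features as predictors of
    # whether a move is an error relative to the tablebase-optimal result.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("tablebase_positions.csv")  # hypothetical file

    feature_sets = {
        "skill": ["elo"],
        "time": ["clock_seconds"],
        "difficulty": ["n_legal_moves", "n_losing_moves"],
    }

    y = df["is_error"]  # 1 if the played move forfeits the optimal result
    for name, cols in feature_sets.items():
        X_tr, X_te, y_tr, y_te = train_test_split(df[cols], y, random_state=0)
        clf = GradientBoostingClassifier().fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"{name:10s} AUC = {auc:.3f}")
    ```

    Under the paper's finding, the difficulty-only model would score markedly higher than the skill-only or time-only models.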

    A quantum group version of quantum gauge theories in two dimensions

    For the special case of the quantum group $SL_q(2,\mathbf{C})$ ($q = \exp(\pi i/r)$, $r \ge 3$) we present an alternative approach to quantum gauge theories in two dimensions. We exhibit the similarities to Witten's combinatorial approach, which is based on ideas of Migdal. The main ingredient is the Turaev-Viro combinatorial construction of topological invariants of closed, compact 3-manifolds and its extension to arbitrary compact 3-manifolds as given by the authors in collaboration with W. Mueller. Comment: 6 pages (plain TeX)
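
    For orientation, the basic substitution on which such quantum group constructions rest (standard material on $U_q(sl_2)$ at a root of unity, not taken from this abstract) replaces the ordinary dimension $n$ of a representation by its quantum integer:

    ```latex
    % Quantum integer at q = e^{\pi i/r}, replacing the ordinary dimension n:
    \[
      [n]_q \;=\; \frac{q^{\,n} - q^{-n}}{q - q^{-1}}
            \;=\; \frac{\sin(n\pi/r)}{\sin(\pi/r)},
      \qquad n = 1, \dots, r-1 .
    \]
    ```

    Since $[n]_q$ vanishes at $n = r$, only finitely many representations contribute, which is what keeps state sums of Turaev-Viro type finite.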

    Numerical Simulation of Vortex Crystals and Merging in N-Point Vortex Systems with Circular Boundary

    In two-dimensional (2D) inviscid incompressible flow, a low background vorticity distribution accelerates the merging of intense vortices (clumps) and their arrangement into the symmetric patterns called ``vortex crystals''; these are observed in experiments on pure electron plasmas and in simulations of Euler fluids. Vortex merging is thought to result from the negative ``temperature'' introduced by L. Onsager; a slight difference in the initial distribution leads instead to vortex crystals. We study these phenomena by examining N-point vortex systems governed by the Hamilton equations of motion. First, we study a three-point vortex system without background distribution. It is known that an N-point vortex system with boundary exhibits chaotic behavior for N ≥ 3. In order to investigate the phase space structure of this three-point vortex system with circular boundary, we examine its Poincaré plot. We show that the topology of the Poincaré plot changes drastically when the parameters concerned with the sign of the ``temperature'' are varied. Next, we introduce a formula for the energy spectrum of an N-point vortex system with circular boundary. Carrying out numerical computations, we reproduce a vortex crystal and a vortex merger in a system of a few hundred point vortices. We confirm that the energy of the vortices is transferred from the clumps to the background in the course of vortex crystallization. In the vortex merging process, we numerically calculate the energy spectrum introduced above and confirm that it behaves as k^{-α} (α ≈ 2.2-2.8) in the region 10^0 < k < 10^1 after the merging. Comment: 30 pages, 11 figures. To be published in the Journal of the Physical Society of Japan, Vol. 74, No.
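
    To illustrate the dynamics being simulated, here is a minimal sketch (mine, not the authors' code) of N like-signed point vortices in the unit disk. The circular boundary is enforced in the standard way by giving each vortex of circulation G_k an image of strength -G_k at 1/conj(z_k); the integrator and all parameters are placeholders.

    ```python
    # Sketch: N-point vortex system in the unit disk with image vortices.
    import numpy as np

    def velocity(z, G):
        """dz/dt for each vortex (complex positions z, circulations G)."""
        n = len(z)
        dz = z[:, None] - z[None, :]
        inv = np.zeros_like(dz)
        off = ~np.eye(n, dtype=bool)
        inv[off] = 1.0 / dz[off]              # 1/(z_j - z_k), no self term
        direct = (G[None, :] * inv).sum(axis=1)
        # Images at 1/conj(z_k) make the boundary circle a streamline.
        images = (G[None, :] / (z[:, None] - 1.0 / np.conj(z)[None, :])).sum(axis=1)
        return np.conj((direct - images) / (2j * np.pi))

    rng = np.random.default_rng(0)
    N = 100                                    # "a few hundred" in the paper
    z = 0.5 * np.sqrt(rng.random(N)) * np.exp(2j * np.pi * rng.random(N))
    G = np.full(N, 1.0 / N)                    # like-signed circulations

    dt = 1e-3
    for _ in range(10_000):                    # midpoint (RK2) stepping; a
        zm = z + 0.5 * dt * velocity(z, G)     # symplectic scheme would be
        z = z + dt * velocity(zm, G)           # preferable for long runs
    ```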

    Assessing dentists' knowledge and experience in restoring endodontically treated teeth using post & cores

    OBJECTIVES: The restoration of endodontically treated, heavily filled teeth has been a challenge for the dental profession for decades. The aims of this study were to investigate dentists' experience and knowledge in the use of post & core when restoring endodontically treated teeth. METHOD: This was a mixed-methods study incorporating quantitative and qualitative data collection. An online questionnaire comprising 18 questions was developed and distributed. It was calculated that 93 respondents were needed to validate the study, of whom 60% should meet a minimum knowledge requirement. RESULTS: 173 respondents completed the questionnaire. 109 (63%; 95% CI 56%, 70%) demonstrated proficient knowledge of post & core restorations. Recent graduates were more likely to follow current guidelines (F=4.570; P<0.034). As the age of the respondent increases, the number of posts placed increases (F=18.85; P<0.001), as does the perceived confidence level (Spearman's rho = 0.43; P<0.01). Experience of postgraduate education also positively influenced clinical confidence. CONCLUSION: Both the placement of post & cores and clinician confidence are influenced by age. More evidence on post usage is required, and several questions remain to be answered on what drives decision making and perceived long-term success. CLINICAL SIGNIFICANCE: There is general acceptance of when a post and core restoration should be used. Clinician experience and age can have an impact on what type of restorations are used. Fibre posts are more commonly used due to their accessibility and cost.
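
    As a quick sanity check (mine, not the authors'), the reported interval follows from 109 of 173 respondents under a normal approximation to the binomial:

    ```python
    # Verify that 109/173 gives 63% with a 95% CI of roughly (56%, 70%).
    from math import sqrt

    n, k = 173, 109
    p = k / n                              # 0.630 -> 63%
    se = sqrt(p * (1 - p) / n)             # standard error of a proportion
    lo, hi = p - 1.96 * se, p + 1.96 * se
    print(f"{p:.0%} (95% CI {lo:.0%}, {hi:.0%})")  # 63% (95% CI 56%, 70%)
    ```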

    On Quasiperiodic Morphisms

    Weakly and strongly quasiperiodic morphisms are tools introduced to study quasiperiodic words. Formally, a weakly quasiperiodic morphism maps at least one non-quasiperiodic word to a quasiperiodic word, while a strongly quasiperiodic morphism maps every non-quasiperiodic word to a quasiperiodic word. Considering them both on finite and on infinite words, we get four families of morphisms, between which we study the relations. We provide algorithms to decide whether a morphism is strongly quasiperiodic on finite words or on infinite words. Comment: 12 pages
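
    For readers unfamiliar with the underlying notion, here is a naive definition-checker (a sketch of mine, not the paper's decision algorithm): a finite word is quasiperiodic if some proper prefix q covers it, i.e. every position lies inside some occurrence of q.

    ```python
    def is_quasiperiodic(w: str) -> bool:
        """True if some proper prefix q of w covers w (q is a quasiperiod)."""
        n = len(w)
        for qlen in range(1, n):
            q = w[:qlen]
            covered = 0                       # first position not yet covered
            for i in range(n - qlen + 1):
                # An occurrence extends coverage only if it leaves no gap.
                if w[i:i + qlen] == q and i <= covered:
                    covered = i + qlen
            if covered == n:
                return True
        return False

    assert is_quasiperiodic("abaababaaba")    # covered by occurrences of "aba"
    assert not is_quasiperiodic("abc")        # no proper prefix covers it
    ```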

    Downsizing of supermassive black holes from the SDSS quasar survey (II). Extension to z~4

    Starting from the quasar sample of the Sloan Digital Sky Survey (SDSS) for which the CIV line is observed, we use an analysis scheme that overcomes the problems related to the Malmquist bias to derive the z-dependence of the maximum mass of active black holes. The same procedure is applied to the low-redshift sample of SDSS quasars for which Hbeta measurements are available. Combining with the results from the previously studied MgII sample, we find that the maximum mass of the quasar population increases as (1+z)^(1.64+/-0.04) in the redshift range 0.1<z<4, which includes the epoch of maximum quasar activity. Comment: 9 pages, 8 figures. To appear in MNRAS
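
    A minimal sketch (mine) of the reported scaling, to show its size over the quoted redshift range; the normalisation m0 is a hypothetical placeholder, since the abstract does not quote one:

    ```python
    # Evaluate the reported scaling M_max ∝ (1+z)^1.64 over 0.1 < z < 4.
    def m_max(z, m0=1.0, gamma=1.64):
        """Maximum active black hole mass, in units of its z=0 value."""
        return m0 * (1.0 + z) ** gamma

    for z in (0.1, 1.0, 2.0, 4.0):
        print(f"z = {z:>3}: M_max / M_max(0) = {m_max(z):.2f}")
    ```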