
    Theories of Reference: What Was the Question?

    The new theory of reference has won popularity. However, a number of noted philosophers have also attempted to reply to the critical arguments of Kripke and others, aiming to vindicate the description theory of reference. Such responses are often based on ingenious novel kinds of descriptions, such as rigidified descriptions, causal descriptions, and metalinguistic descriptions. This prolonged debate raises the doubt of whether the different parties really share any common understanding of what the central question of the philosophical theory of reference is: that is, what is the main question to which descriptivism and the causal-historical theory have presented competing answers? One aim of the paper is to clarify this issue. The most influential objections to the new theory of reference are critically reviewed, and special attention is paid to certain important later advances in the new theory of reference, due to Devitt and others.

    Assessment of an electronic voting system within the tutorial setting: a randomised controlled trial (ISRCTN54535861)

    Background: Electronic voting systems have been used in various educational settings with little measurement of their educational impact on students. The goal of this study was to measure the effects of including an electronic voting system in a small group tutorial. Method: A prospective randomised controlled trial was run at the Royal Adelaide Hospital, a teaching hospital in Adelaide, Australia. 102 students in their first clinical year of medical school participated in the study, in which an electronic voting system was introduced as a teaching aid into a standard tutorial. Long-term retention of knowledge and understanding of the topics discussed in the tutorials was measured, and student response to the introduction of the electronic voting system was assessed. Results: Students using the electronic voting system had improved long-term retention of understanding of material taught in the tutorial. Students had a positive response to the use of this teaching aid. Conclusion: Electronic voting systems can provide a stimulating learning environment for students and, in a small group tutorial, may improve educational outcomes.
    Edward J. Palmer, Peter G. Devitt, Neville J. De Young and David Morri

    The Explication Defence of Arguments from Reference

    In a number of influential papers, Machery, Mallon, Nichols and Stich have presented a powerful critique of so-called arguments from reference: arguments that assume that a particular theory of reference is correct in order to establish a substantive conclusion. The critique is that, because cross-cultural variation in semantic intuitions supposedly undermines the standard methodology for theorising about reference, the assumption that a theory of reference is correct is unjustified. I argue that the many extant responses to Machery et al.’s critique do little for the proponent of an argument from reference, as they do not show how to justify the problematic assumption. I then argue that it can in principle be justified by an appeal to Carnapian explication. I show how to apply the explication defence to arguments from reference given by Andreasen (for the biological reality of race) and by Churchland (against the existence of beliefs and desires).

    Integration of highly probabilistic sources into optical quantum architectures: perpetual quantum computation

    In this paper we introduce a design for an optical topological cluster state computer constructed exclusively from a single quantum component. Unlike previous efforts we eliminate the need for on-demand, high-fidelity photon sources and detectors and replace them with the same device utilised to create photon/photon entanglement. This introduces highly probabilistic elements into the optical architecture while maintaining complete specificity of the structure and operation for a large-scale computer. Photons in this system are continually recycled back into the preparation network, allowing an arbitrarily deep 3D cluster to be prepared using a comparatively small number of photonic qubits and, consequently, eliminating the need for high-frequency, deterministic photon sources.
    Comment: 19 pages, 13 figures (2 appendices with additional figures). Comments welcome
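
    A minimal Monte Carlo sketch of the recycling idea described above: a fixed pool of photonic qubits is fed repeatedly through a probabilistic entangling device, and photons from failed attempts are returned to the preparation network rather than replaced by fresh on-demand sources. The pool size, success probability, and cycle count are arbitrary placeholders, not figures from the paper.

```python
import random

# Hypothetical parameters (not from the paper): a pool of photonic qubits is
# repeatedly fed through a probabilistic entangling device; failures are
# recycled back into the preparation network instead of demanding fresh,
# deterministic single-photon sources.
POOL_SIZE = 50          # photons circulating in the preparation network
P_SUCCESS = 0.1         # success probability of one entangling attempt
CYCLES = 10_000         # preparation rounds to simulate

def bonds_per_cycle(pool_size: int, p: float) -> int:
    """Count successful entangling bonds in one round of pairwise attempts."""
    attempts = pool_size // 2
    return sum(random.random() < p for _ in range(attempts))

total_bonds = sum(bonds_per_cycle(POOL_SIZE, P_SUCCESS) for _ in range(CYCLES))
rate = total_bonds / CYCLES
print(f"average bonds added to the cluster per cycle: {rate:.2f}")
print(f"expected value p * (pool/2): {P_SUCCESS * POOL_SIZE / 2:.2f}")
```

    Because failed photons re-enter the pool, the same small set of qubits keeps contributing entangling attempts every cycle, which is why the cluster can be grown arbitrarily deep without a deterministic source.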

    Surface code quantum computing by lattice surgery

    In recent years, surface codes have become a leading method for quantum error correction in theoretical large-scale computational and communications architecture designs. Their comparatively high fault-tolerant thresholds and their natural 2-dimensional nearest-neighbour (2DNN) structure make them an obvious choice for large-scale designs in experimentally realistic systems. While fundamentally based on the toric code of Kitaev, there are many variants, two of which are the planar- and defect-based codes. Planar codes require fewer qubits to implement (for the same strength of error correction), but are restricted to encoding a single qubit of information. Interactions between encoded qubits are achieved via transversal operations, thus destroying the inherent 2DNN nature of the code. In this paper we introduce a new technique enabling the coupling of two planar codes without transversal operations, maintaining the 2DNN nature of the encoded computer. Our lattice surgery technique comprises splitting and merging planar code surfaces, and enables us to perform universal quantum computation (including magic state injection) while removing the need for braided logic in a strictly 2DNN design, hence reducing the overall qubit resources for logic operations. These resources are further reduced by the use of a rotated lattice for the planar encoding. We show how lattice surgery allows us to distribute encoded GHZ states in a more direct (and overhead-friendly) manner, and how a demonstration of an encoded CNOT between two distance-3 logical states is possible with 53 physical qubits, half of that required in any other known construction in 2D.
    Comment: Published version. 29 pages, 18 figures
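
    As a rough illustration of the saving from the rotated planar encoding mentioned above, the sketch below compares physical-qubit counts for unrotated and rotated planar patches of distance d. The closed-form expressions are the commonly quoted surface-code bookkeeping, assumed here for illustration rather than taken from the paper's own resource analysis.

```python
# Rough physical-qubit counts (data + syndrome qubits) for a single planar
# surface-code patch of distance d. The closed-form expressions below are the
# commonly quoted ones, used here only to show the trend.

def unrotated_planar_qubits(d: int) -> int:
    # d*d + (d-1)*(d-1) data qubits plus 2*d*(d-1) syndrome qubits = (2d-1)^2
    return (2 * d - 1) ** 2

def rotated_planar_qubits(d: int) -> int:
    # d*d data qubits plus d*d - 1 syndrome qubits
    return 2 * d * d - 1

for d in (3, 5, 7):
    u, r = unrotated_planar_qubits(d), rotated_planar_qubits(d)
    print(f"d={d}: unrotated={u:4d}  rotated={r:4d}  saving={1 - r / u:.0%}")
```

    At distance 3 this gives 17 qubits per rotated patch versus 25 unrotated, which conveys why a rotated encoding shrinks the footprint of logical operations, independent of the additional savings lattice surgery provides.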

    Appetitive and Dietary Effects of Consuming an Energy-Dense Food (Peanuts) with or Between Meals by Snackers and Non-Snackers.

    Energy-dense foods are inconsistently implicated in elevated energy intake (EI). This may stem from other food properties and/or differences in dietary incorporation, that is, as snacks or with meals. Objective. To assess the effects of intake pattern and food properties on acute appetitive ratings (AR) and EI. Design. 201 normal-weight and overweight adults consumed a standard lunch. Test loads of 1255.2 kJ (300 kcal) were added to the lunch or provided as a snack. The loads (peanuts, snack mix, and snack mix with peanuts) were matched for energy, macronutrients, and volume, with a lunch portion as control. Participants completed meal and snack sessions with their randomly assigned load. Results. No differences were observed in daily EI or AR for meal versus snack or treatment versus control. Consumption of peanuts as a snack tended to strengthen dietary compensation compared to peanuts or other loads consumed with a meal. Conclusions. Inclusion of an energy-dense food as a snack or meal component had a comparable influence on AR and EI. Peanuts tended to elicit stronger dietary compensation when consumed as a snack versus with a meal. If substantiated, this latter observation suggests that properties other than those controlled here (energy, macronutrient content, and volume) modify AR and EI.

    Simulating chemistry efficiently on fault-tolerant quantum computers

    Quantum computers can in principle simulate quantum physics exponentially faster than their classical counterparts, but some technical hurdles remain. Here we consider methods to make proposed chemical simulation algorithms computationally fast on fault-tolerant quantum computers in the circuit model. Fault tolerance constrains the choice of available gates, so that arbitrary gates required for a simulation algorithm must be constructed from sequences of fundamental operations. We examine techniques for constructing arbitrary gates which perform substantially faster than circuits based on the conventional Solovay-Kitaev algorithm [C.M. Dawson and M.A. Nielsen, Quantum Inf. Comput., 6:81, 2006]. For a given approximation error $\epsilon$, arbitrary single-qubit gates can be produced fault-tolerantly and using a limited set of gates in time which is $O(\log \epsilon)$ or $O(\log \log \epsilon)$; with sufficient parallel preparation of ancillas, constant average depth is possible using a method we call programmable ancilla rotations. Moreover, we construct and analyze efficient implementations of first- and second-quantized simulation algorithms using the fault-tolerant arbitrary gates and other techniques, such as implementing various subroutines in constant time. A specific example we analyze is the ground-state energy calculation for lithium hydride.
    Comment: 33 pages, 18 figures
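
    To illustrate the scaling gap the abstract refers to, the sketch below compares a Solovay-Kitaev-style sequence length, growing roughly as log^c(1/eps) with the c ≈ 3.97 exponent quoted by Dawson and Nielsen, against a construction whose length grows only logarithmically in 1/eps. The prefactors are arbitrary placeholders, not resource estimates from the paper.

```python
import math

# Compare how gate-sequence length grows with the target accuracy epsilon.
# C_SK ~ 3.97 is the Solovay-Kitaev exponent quoted by Dawson and Nielsen;
# the leading constants are placeholders chosen only to make the trend visible.
C_SK = 3.97

def solovay_kitaev_length(eps: float, const: float = 1.0) -> float:
    return const * math.log(1.0 / eps) ** C_SK

def log_depth_length(eps: float, const: float = 1.0) -> float:
    return const * math.log(1.0 / eps)

for eps in (1e-3, 1e-6, 1e-9, 1e-12):
    sk = solovay_kitaev_length(eps)
    ld = log_depth_length(eps)
    print(f"eps={eps:.0e}: Solovay-Kitaev ~ {sk:12.0f}   O(log 1/eps) ~ {ld:6.1f}")
```

    The polylogarithmic-versus-logarithmic gap widens rapidly as the accuracy requirement tightens, which is why faster single-qubit gate synthesis matters for long chemical simulation circuits.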

    Novel Approach for Evaluating Detector-Related Uncertainties in a LArTPC Using MicroBooNE Data

    Primary challenges for current and future precision neutrino experiments using liquid argon time projection chambers (LArTPCs) include understanding detector effects and quantifying the associated systematic uncertainties. This paper presents a novel technique for assessing and propagating LArTPC detector-related systematic uncertainties. The technique makes modifications to simulation waveforms based on a parameterization of observed differences in ionization signals from the TPC between data and simulation, while remaining insensitive to the details of the detector model. The modifications are then used to quantify the systematic differences in low- and high-level reconstructed quantities. This approach could be applied to future LArTPC detectors, such as those used in SBN and DUNE.
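
    A schematic sketch of the kind of waveform modification described above, with entirely hypothetical numbers: a simulated ionization pulse is rescaled in amplitude and stretched in time according to assumed data/simulation ratios, and the resulting shift in a reconstructed quantity (here, integrated charge) is read off. The Gaussian pulse and the two ratio parameters are illustrative stand-ins, not the MicroBooNE parameterization.

```python
import numpy as np

# Illustrative sketch (not the MicroBooNE implementation): modify a simulated
# ionization waveform according to a parameterized data/simulation difference
# and look at the shift in a reconstructed quantity (integrated charge).

ticks = np.arange(0, 200)

def simulated_waveform(center=100.0, width=8.0, amplitude=50.0):
    """Toy Gaussian ionization pulse standing in for a simulated TPC signal."""
    return amplitude * np.exp(-0.5 * ((ticks - center) / width) ** 2)

def modify(waveform, amp_ratio=1.05, width_ratio=1.10):
    """Scale the amplitude and stretch the pulse in time by assumed data/MC ratios."""
    center = ticks[np.argmax(waveform)]
    stretched = np.interp(ticks, center + (ticks - center) * width_ratio, waveform)
    return amp_ratio * stretched

nominal = simulated_waveform()
shifted = modify(nominal)

q_nominal, q_shifted = nominal.sum(), shifted.sum()
print(f"reconstructed charge shift: {100 * (q_shifted / q_nominal - 1):+.1f}%")
```

    In an analysis, the difference between quantities reconstructed from the nominal and modified waveforms would be propagated as the detector-related systematic uncertainty, without ever committing to a specific microscopic detector model.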

    New Theory-driven GENIE Tune for MicroBooNE

    A novel tune has been made for the MicroBooNE experiment, fitting 4 new parameters within the GENIE v3.0.6 Monte Carlo program to charged-current pionless data from the T2K experiment. New uncertainties on the tuned parameters were obtained, and these results will be used in future MicroBooNE analyses.
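
    In spirit, a tune of this kind is a small chi-square fit: a handful of generator parameters are adjusted until the reweighted prediction matches binned data within the data covariance. The sketch below shows that structure with a toy four-parameter model and fabricated pseudo-data; none of the parameters, bins, or values correspond to the actual GENIE v3.0.6 or T2K inputs.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of a generator tune: adjust a few model parameters so that the
# prediction matches binned data, minimizing a chi-square built from the data
# covariance. The toy model, pseudo-data, and parameters are placeholders.

bins = np.linspace(0.0, 2.0, 11)                  # toy kinematic bins
centers = 0.5 * (bins[:-1] + bins[1:])

def prediction(theta):
    """Toy 4-parameter model: normalization, slope, and a Gaussian shape term."""
    norm, slope, peak, width = theta
    width = abs(width) + 1e-6                      # keep the shape term well defined
    return norm * np.exp(-slope * centers) + peak * np.exp(-0.5 * ((centers - 0.6) / width) ** 2)

rng = np.random.default_rng(0)
truth = np.array([1.0, 1.5, 0.4, 0.2])
cov = np.diag((0.05 * prediction(truth)) ** 2)     # toy diagonal covariance
data = rng.multivariate_normal(prediction(truth), cov)
cov_inv = np.linalg.inv(cov)

def chi2(theta):
    resid = data - prediction(theta)
    return resid @ cov_inv @ resid

fit = minimize(chi2, x0=[0.8, 1.0, 0.2, 0.3], method="Nelder-Mead")
print("best-fit parameters:", np.round(fit.x, 3))
print("chi2 / dof:", round(fit.fun / (len(data) - 4), 2))
```

    The post-fit parameter values become the new generator central values, and the spread of acceptable fits provides the updated uncertainties carried into downstream analyses.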

    First Measurement of Inclusive Electron-Neutrino and Antineutrino Charged Current Differential Cross Sections in Charged Lepton Energy on Argon in MicroBooNE

    We present the first measurement of the single-differential $\nu_e + \bar{\nu}_e$ charged-current inclusive cross sections on argon in electron or positron energy and in electron or positron scattering cosine over the full angular range. Data were collected using the MicroBooNE liquid argon time projection chamber located off-axis from the Fermilab Neutrinos at the Main Injector beam over an exposure of $2.0\times10^{20}$ protons on target. The signal definition includes a 60 MeV threshold on the $\nu_e$ or $\bar{\nu}_e$ energy and a 120 MeV threshold on the electron or positron energy. The measured total and differential cross sections are found to be in agreement with the GENIE, NuWro, and GiBUU neutrino generators.
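
    For orientation, a flux-integrated differential cross section of the kind reported above is conventionally estimated bin by bin as below; this is the textbook estimator, shown only for illustration, and the actual analysis may use a more sophisticated unfolding.

```latex
% Standard flux-integrated, bin-by-bin estimator for a differential cross
% section (illustrative only; the analysis itself may unfold differently):
%   N_i    selected events in bin i         B_i    estimated background in bin i
%   eps_i  selection efficiency in bin i    dE_i   width of bin i
%   N_T    number of argon targets          Phi    integrated nu_e + anti-nu_e flux
\[
  \left(\frac{d\sigma}{dE}\right)_i
    = \frac{N_i - B_i}{\epsilon_i\,\Delta E_i\,N_{\mathrm{T}}\,\Phi}
\]
```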