Blame Trees
We consider the problem of merging individual text documents, motivated by the single-file merge algorithms of document-based version control systems. Abstracting away the merging of conflicting edits to an external conflict-resolution function (possibly implemented by a human), we consider the efficient identification of conflicting regions. We show how to use a tree-based document representation to build a data structure that quickly answers queries inspired by the "blame" query of some version control systems. A "blame" query associates every line of a document with the revision in which it was last edited. Our tree uses this idea to quickly identify conflicting edits. We show how to perform a merge operation in time proportional to the sum of the logarithms of the shared regions of the documents, plus the cost of conflict resolution. Our data structure is functional and therefore confluently persistent, allowing arbitrary version DAGs as in real version-control systems. Our results rely on concurrent traversal of two trees with short-circuiting when shared subtrees are encountered. United States. Defense Advanced Research Projects Agency (Clean-Slate Design of Resilient, Adaptive, Secure Hosts (CRASH) program, BAA10-70); United States. Defense Advanced Research Projects Agency (contract #N66001-10-2-4088, Bridging the Security Gap with Decentralized Information Flow Control); Danish National Research Foundation (Center for Massive Data Algorithmics (MADALGO))
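The short-circuiting merge described in the abstract can be sketched in a few lines. This is a minimal illustration only, not the paper's actual data structure: the node layout, the leaf-per-line representation, and the three-way merge rule are assumptions chosen to show how object identity detects shared subtrees in O(1).

```python
# Illustrative sketch only; not the paper's algorithm. Node layout and the
# three-way merge rule are assumptions for demonstration purposes.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    """Immutable (functional) tree node; structural sharing between
    versions gives confluent persistence for free."""
    left: Optional["Node"]
    right: Optional["Node"]
    line: Optional[str] = None  # leaves hold document lines

def merge(base, a, b, resolve):
    """Three-way merge of versions a and b against common ancestor base.

    Shared subtrees are detected by object identity ('is'), so regions
    untouched by either side cost O(1) instead of a full traversal."""
    if a is b:        # both sides share this subtree: nothing to merge
        return a
    if a is base:     # only b edited here: take b's subtree wholesale
        return b
    if b is base:     # only a edited here
        return a
    # Both sides edited this region: recurse on internal nodes,
    # otherwise hand the conflicting leaves to the resolver.
    if all(n is not None and n.line is None for n in (base, a, b)):
        return Node(merge(base.left, a.left, b.left, resolve),
                    merge(base.right, a.right, b.right, resolve))
    return resolve(base, a, b)
```

For example, if version `a` edits only the left leaf and version `b` edits only the right leaf, the merge reuses both edited subtrees by identity without visiting their interiors, and the resolver is never called.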
The 8Li Calibration Source for the Sudbury Neutrino Observatory
A calibration source employing 8Li (t_1/2 = 0.838 s) has been developed for
use with the Sudbury Neutrino Observatory (SNO). This source creates a spectrum
of beta particles with an energy range similar to that of the SNO 8B solar
neutrino signal. The source is used to test the SNO detector's energy response,
position reconstruction and data reduction algorithms. The 8Li isotope is
created using a deuterium-tritium neutron generator in conjunction with a 11B
target, and is carried to a decay chamber using a gas/aerosol transport system.
The decay chamber detects prompt alpha particles by gas scintillation in
coincidence with the beta particles which exit through a thin stainless steel
wall. A description is given of the production, transport, and tagging
techniques along with a discussion of the performance and application of the
source. Comment: 11 pages plus 9 figures. Submitted to Nuclear Instruments and Methods.
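Because the 0.838 s half-life quoted above is short, the fraction of 8Li surviving the gas/aerosol transport to the decay chamber depends strongly on transit time. A small back-of-the-envelope sketch (the 1 s transit time below is a hypothetical value, not from the abstract):

```python
import math

T_HALF = 0.838  # s, 8Li half-life (value from the abstract)

def surviving_fraction(t, t_half=T_HALF):
    """Fraction of 8Li nuclei that have not yet decayed after time t,
    from the exponential decay law N(t)/N0 = exp(-ln(2) * t / t_half)."""
    return math.exp(-math.log(2) * t / t_half)

# Hypothetical 1 s transit through the transport system: a bit under half
# of the produced 8Li would reach the decay chamber undecayed.
frac = surviving_fraction(1.0)
```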
The Cosmic Microwave Background in an Inhomogeneous Universe - why void models of dark energy are only weakly constrained by the CMB
The dimming of Type Ia supernovae could be the result of Hubble-scale
inhomogeneity in the matter and spatial curvature, rather than signaling the
presence of a dark energy component. A key challenge for such models is to fit
the detailed spectrum of the cosmic microwave background (CMB). We present a
detailed discussion of the small-scale CMB in an inhomogeneous universe,
focusing on spherically symmetric `void' models. We allow for the dynamical
effects of radiation while analyzing the problem, in contrast to other work
which inadvertently fine tunes its spatial profile. This is a surprisingly
important effect and we reach substantially different conclusions. Models which
are open at CMB distances fit the CMB power spectrum without fine tuning; these
models also fit the supernovae and local Hubble rate data which favours a high
expansion rate. Asymptotically flat models may fit the CMB, but require some
extra assumptions. We argue that a full treatment of the radiation in these
models is necessary if we are to understand the correct constraints from the
CMB, as well as other observations which rely on it, such as spectral
distortions of the black body spectrum, the kinematic Sunyaev-Zeldovich effect
or the Baryon Acoustic Oscillations. Comment: 23 pages with 14 figures. v2 has considerably extended discussion and analysis, but the basic results are unchanged. v3 is the final version.
Local Void vs Dark Energy: Confrontation with WMAP and Type Ia Supernovae
It is now well established that if we happen to be living in the middle of a
large underdense region, then we will observe an "apparent acceleration", even
when any form of dark energy is absent. In this paper, we present a "Minimal
Void" scenario, i.e. a "void" with minimal underdensity contrast (of about
-0.4) and radius (~ 200-250 Mpc/h) that can not only explain the supernova
data but also be consistent with the 3-yr WMAP data. We also discuss
consistency of our model with various other measurements such as Big Bang
Nucleosynthesis, Baryon Acoustic Oscillations and local measurements of the
Hubble parameter, and also point out possible observable signatures. Comment: Minor numerical errors and typos corrected, references added.
The Sudbury Neutrino Observatory
The Sudbury Neutrino Observatory is a second generation water Cherenkov
detector designed to determine whether the currently observed solar neutrino
deficit is a result of neutrino oscillations. The detector is unique in its use
of D2O as a detection medium, permitting it to make a solar model-independent
test of the neutrino oscillation hypothesis by comparison of the charged- and
neutral-current interaction rates. In this paper the physical properties,
construction, and preliminary operation of the Sudbury Neutrino Observatory are
described. Data and predicted operating parameters are provided whenever
possible. Comment: 58 pages, 12 figures, submitted to Nucl. Inst. Meth. Uses elsart and epsf style files. For additional information about SNO see http://www.sno.phy.queensu.ca . This version has some new references.
Volume I. Introduction to DUNE
The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay: these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. The Deep Underground Neutrino Experiment (DUNE) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. This TDR is intended to justify the technical choices for the far detector that flow down from the high-level physics goals through requirements at all levels of the Project. Volume I contains an executive summary that introduces the DUNE science program, the far detector and the strategy for its modular designs, and the organization and management of the Project. The remainder of Volume I provides more detail on the science program that drives the choice of detector technologies and on the technologies themselves. It also introduces the designs for the DUNE near detector and the DUNE computing model, for which DUNE is planning design reports. Volume II of this TDR describes DUNE's physics program in detail. Volume III describes the technical coordination required for the far detector design, construction, installation, and integration, and its organizational structure. Volume IV describes the single-phase far detector technology. A planned Volume V will describe the dual-phase technology.
Deep Underground Neutrino Experiment (DUNE), far detector technical design report, volume III: DUNE far detector technical coordination
The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay: these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. The Deep Underground Neutrino Experiment (DUNE) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. Volume III of this TDR describes how the activities required to design, construct, fabricate, install, and commission the DUNE far detector modules are organized and managed. This volume details the organizational structures that will carry out and/or oversee the planned far detector activities safely, successfully, on time, and on budget. It presents overviews of the facilities, supporting infrastructure, and detectors for context, and it outlines the project-related functions and methodologies used by the DUNE technical coordination organization, focusing on the areas of integration engineering, technical reviews, quality assurance and control, and safety oversight. Because of its more advanced stage of development, functional examples presented in this volume focus primarily on the single-phase (SP) detector module.
Highly-parallelized simulation of a pixelated LArTPC on a GPU
The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speedup of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
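The per-pixel independence that makes this workload suitable for GPU kernels can be illustrated with a toy model. This is not the DUNE simulator: the Gaussian charge-cloud shape, the grid geometry, and all parameter values below are assumptions chosen only to show the parallelism pattern; the real chain compiles similar per-channel Python into CUDA kernels with Numba, whereas here NumPy broadcasting plays the GPU thread's role on the CPU.

```python
# Toy sketch of per-pixel charge collection; NOT the DUNE/larnd-sim model.
# The Gaussian cloud and geometry below are illustrative assumptions.
import numpy as np

def induced_charge(pixel_centers, cloud_center, cloud_sigma, total_charge):
    """Charge collected by each pixel from a 2D Gaussian charge cloud.

    Every pixel is computed independently of every other pixel, so the
    loop is embarrassingly parallel: on a GPU each thread would handle
    one pixel; here a vectorized NumPy expression does the same work."""
    d2 = np.sum((pixel_centers - cloud_center) ** 2, axis=1)
    weights = np.exp(-d2 / (2.0 * cloud_sigma ** 2))
    return total_charge * weights / weights.sum()

# Hypothetical 32x32 pixel grid with 4 mm pitch:
xs, ys = np.meshgrid(np.arange(32) * 4.0, np.arange(32) * 4.0)
pixels = np.column_stack([xs.ravel(), ys.ravel()])
q = induced_charge(pixels,
                   cloud_center=np.array([62.0, 62.0]),
                   cloud_sigma=6.0,
                   total_charge=1.0e5)
```

Because each output element depends only on one pixel's coordinates and the shared cloud parameters, mapping this onto one GPU thread per pixel requires no synchronization, which is the property the abstract's four-orders-of-magnitude speedup exploits.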