    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life and whether protons eventually decay --- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated from a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty- to thirty-year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess.
    Comment: Major update of the previous version. This is the reference document for the LBNE science program and its current status. Chapters 1, 3, and 9 provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. 288 pages, 116 figures.
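
    As an illustration of why this baseline is favorable (a textbook two-flavor vacuum approximation, not a formula quoted from the abstract; the full LBNE analysis uses three flavors plus matter effects):

        P(\nu_\mu \to \nu_e) \approx \sin^2 2\theta \,
            \sin^2\!\left( \frac{1.267\, \Delta m^2\,[\mathrm{eV^2}]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right)

    The first oscillation maximum occurs where the phase reaches \pi/2; with \Delta m^2_{31} \approx 2.5 \times 10^{-3}\ \mathrm{eV^2} and L = 1300 km this lands near E \approx 2.6 GeV, in the few-GeV range where a megawatt-class accelerator beam is most intense.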

    Computing in High Energy and Nuclear Physics (CHEP) 2012

    The Double Chooz (DC) reactor anti-neutrino experiment consists of a neutrino detector and a large-area Outer Veto detector. A custom data-acquisition (DAQ) system, written in the Ada language, was developed for all sub-detectors of the neutrino detector, together with a generic object-oriented data-acquisition system for the Outer Veto detector. Generic object-oriented programming is also used to support the read-out of several electronic systems, providing a simple interface for adding any new electronics given a dedicated driver. The core electronics of the experiment are based on FADC electronics with a 500 MHz sampling rate, so a data-reduction scheme has been implemented to reduce the data volume per trigger. A dynamic data format was created to allow each trigger to be reduced before the data are written to disk; the decision is based on low-level information that determines the relevance of each trigger. The DAQ is structured internally into two types of processors: several read-out processors reading and processing data at the crate level, and one event-builder processor collecting data from all crates and further processing the data before writing them to disk. An average output rate of 40 MB/s can be handled without dead time. The Outer Veto DAQ uses a token-passing scheme to read out five daisy chains of multi-anode PMTs via five USB interfaces; the maximum rate this system can handle is about 40 MB/s, limited only by the USB 2.0 throughput. A dynamic data reducer is implemented to reduce the amount of data written to disk, and an object-oriented event-builder process collects the data from the multiple USB streams and merges them into a single data stream ordered in time. Separate object-oriented code merges the information coming from the neutrino and Outer Veto DAQs into a single event based on timing information. The internal architecture and functioning of the Double Chooz DAQs, as well as examples of their performance and other capabilities, are described.
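
    The time-ordered merge that both event builders perform, collecting fragments from several independent read-out streams into a single stream ordered in time, can be sketched compactly. The Python sketch below illustrates that general technique; it is not the experiment's code, and the function names, the event grouping, and the coincidence window are assumptions made for the example.

        import heapq

        def build_events(streams, window_ns=100):
            """Merge time-stamped fragments from several sorted read-out
            streams into time-ordered events, grouping fragments that lie
            within `window_ns` of the first fragment of the event.
            (Illustrative sketch only; the window is an assumption.)"""
            event, t0 = [], None
            # heapq.merge yields fragments from all streams in timestamp order,
            # much as an event builder consumes crate-level streams.
            for t, payload in heapq.merge(*streams):
                if t0 is not None and t - t0 > window_ns:
                    yield (t0, event)          # gap exceeded: close current event
                    event = []
                if not event:
                    t0 = t
                event.append(payload)
            if event:
                yield (t0, event)

        # Example: two crates emitting (timestamp_ns, data) pairs, already sorted.
        crate_a = [(100, "a1"), (350, "a2")]
        crate_b = [(120, "b1"), (900, "b2")]
        for t, frags in build_events([iter(crate_a), iter(crate_b)]):
            print(t, frags)   # 100 ['a1', 'b1'] / 350 ['a2'] / 900 ['b2']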

    Design Tradeoffs in Applying Content Addressable Storage to Enterprise-scale Systems Based on Virtual Machines

    This paper analyzes the usage data from a live deployment of an enterprise client management system based on virtual machine (VM) technology. Over a period of seven months, twenty-three volunteers used VM-based computing environments hosted by the system and created over 800 checkpoints of VM state, where each checkpoint included the virtual memory and disk states. Using this data, we study the design tradeoffs in applying content addressable storage (CAS) to such VM-based systems. In particular, we explore the impact on storage requirements and network load of different privacy properties and data granularities in the design of the underlying CAS system. The study clearly demonstrates that relaxing privacy can reduce the resource requirements of the system, and identifies designs that provide reasonable compromises between privacy and resource demands.
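
    The core mechanism being evaluated, content addressable storage, names each chunk of data by a cryptographic hash of its contents, so identical chunks across checkpoints are stored only once. The Python sketch below illustrates the idea; it is not the paper's system, and the chunk size and interface are assumptions made for the example.

        import hashlib

        class CASStore:
            """Minimal content-addressable store: chunks are keyed by the
            SHA-256 of their contents, so identical chunks (e.g. shared
            between VM checkpoints) are stored once. Illustrative only."""
            def __init__(self, chunk_size=4096):
                self.chunk_size = chunk_size
                self.chunks = {}                     # hash -> chunk bytes

            def put(self, data):
                """Store `data`; return its recipe (list of chunk hashes)."""
                recipe = []
                for i in range(0, len(data), self.chunk_size):
                    chunk = data[i:i + self.chunk_size]
                    key = hashlib.sha256(chunk).hexdigest()
                    self.chunks.setdefault(key, chunk)   # dedup identical content
                    recipe.append(key)
                return recipe

            def get(self, recipe):
                """Reassemble data from a recipe of chunk hashes."""
                return b"".join(self.chunks[key] for key in recipe)

        # Two checkpoints sharing most of their state deduplicate almost entirely.
        store = CASStore()
        store.put(b"A" * 8192 + b"unique-1")
        store.put(b"A" * 8192 + b"unique-2")
        print(len(store.chunks))                     # 3 distinct chunks, not 6

    Because identical plaintext always hashes to the same key, cross-user deduplication also reveals which content users share; that is one face of the privacy/resource tradeoff the study quantifies.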

    To Carry or To Find? Footloose on the Internet with a zero-pound laptop

    Internet Suspend/Resume (ISR) is a new model of personal computing that cuts the tight binding between personal computing state and personal computing hardware. ISR is implemented by layering a virtual machine (VM) on distributed storage. The VM encapsulates execution and user customization state; distributed storage transports that state across space and time. In this paper, we explore the implications of ISR for an infrastructure-based approach to mobile computing. We report on our experience with three versions of ISR.

    Coronal Heating as Determined by the Solar Flare Frequency Distribution Obtained by Aggregating Case Studies

    Flare frequency distributions represent a key approach to addressing one of the largest problems in solar and stellar physics: determining the mechanism that counter-intuitively heats coronae to temperatures that are orders of magnitude hotter than the corresponding photospheres. It is widely accepted that the magnetic field is responsible for the heating, but there are two competing mechanisms that could explain it: nanoflares or Alfvén waves. To date, neither can be directly observed. Nanoflares are, by definition, extremely small, but their aggregate energy release could represent a substantial heating mechanism, presuming they are sufficiently abundant. One way to test this presumption is via the flare frequency distribution, which describes how often flares of various energies occur. If the slope of the power law fitting the flare frequency distribution is above a critical threshold, α = 2 as established in prior literature, then there should be a sufficient abundance of nanoflares to explain coronal heating. We performed >600 case studies of solar flares, made possible by an unprecedented number of data analysts via three semesters of an undergraduate physics laboratory course. This allowed us to include two crucial, but nontrivial, analysis methods: pre-flare baseline subtraction and computation of the flare energy, which requires determining flare start and stop times. We aggregated the results of these analyses into a statistical study to determine that α = 1.63 ± 0.03. This is below the critical threshold, suggesting that Alfvén waves are an important driver of coronal heating.
    Comment: 1,002 authors, 14 pages, 4 figures, 3 tables; published by The Astrophysical Journal on 2023-05-09, volume 948, page 7.
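
    Why α = 2 is the critical threshold follows from the power-law energy budget; the worked form below is the standard textbook argument, not a derivation quoted from the paper. With a flare frequency distribution dN/dE = A E^{-\alpha}, the total energy released by flares between E_min and E_max is

        E_{\mathrm{tot}} = \int_{E_{\min}}^{E_{\max}} E \,\frac{dN}{dE}\, dE
                         = A \int_{E_{\min}}^{E_{\max}} E^{1-\alpha}\, dE
                         = \frac{A}{2-\alpha}\left( E_{\max}^{\,2-\alpha} - E_{\min}^{\,2-\alpha} \right)

    For α > 2 the term E_min^{2-α} diverges as E_min → 0, so the smallest events dominate and sufficiently abundant nanoflares could supply the heating; for α < 2, as with the measured α = 1.63 ± 0.03, the largest flares dominate the budget and nanoflares alone fall short.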