Search for neutral Higgs bosons decaying to tau pairs in association with b-quarks at the D0 Detector
We report results from a search for neutral Higgs bosons decaying to tau
pairs produced in association with a b-quark in 1.2/fb of data taken from June
2006 to August 2007 with the D0 detector at Fermi National Accelerator
Laboratory. The final state includes a muon, a hadronically decaying tau, and a jet identified as coming from a b-quark. We set cross section times branching
ratio limits on production of such neutral Higgs bosons in the mass range from
90 GeV/c^2 to 160 GeV/c^2. Exclusion limits are set at the 95% Confidence Level
for several supersymmetric scenarios.
Comment: 4 pages, 4 figures, Proceedings of the Lake Louise Winter Institute 200
Search for neutral Higgs bosons decaying to tau pairs produced in association with b-quarks at √s = 1.96 TeV
We report results from a search for neutral Higgs bosons decaying to tau pairs produced in association with a b-quark in 1.6 fb⁻¹ of data taken from June 2006 to March 2008 with the D0 detector at Fermi National Accelerator Laboratory. The final state includes a muon, a hadronically decaying tau, and a jet identified as coming from a b-quark. We set cross section times branching ratio limits on production of such neutral Higgs bosons φ in the mass range from 90 GeV to 160 GeV. Exclusion limits are set at the 95% Confidence Level for several supersymmetric scenarios.
Cosmology with Gravitational Waves in DES and LSST
Motivated by the prospect of the wealth of data arising from the inauguration of the era of gravitational-wave detection by ground-based interferometers, the DES collaboration, in partnership with members of the LIGO collaboration and of the astronomical community at large, has established a research program to search for the optical counterparts of gravitational-wave events and to explore their use as cosmological probes. In this talk we present the status of our program and discuss prospects for establishing this new probe as part of the portfolio of the Dark Energy research program in the future, in particular for the next-generation survey, LSST.
Accelerating Machine Learning Inference with GPUs in ProtoDUNE Data Processing
We study the performance of a cloud-based GPU-accelerated inference server to
speed up event reconstruction in neutrino data batch jobs. Using detector data
from the ProtoDUNE experiment and employing the standard DUNE grid job
submission tools, we attempt to reprocess the data by running several thousand
concurrent grid jobs, a rate we expect to be typical of current and future
neutrino physics experiments. We process most of the dataset with the GPU
version of our processing algorithm and the remainder with the CPU version for
timing comparisons. We find that a 100-GPU cloud-based server is able to easily
meet the processing demand, and that using the GPU version of the event
processing algorithm is two times faster than processing these data with the
CPU version when comparing to the newest CPUs in our sample. The amount of data
transferred to the inference server during the GPU runs can overwhelm even the
highest-bandwidth network switches, however, unless care is taken to observe
network facility limits or otherwise distribute the jobs to multiple sites. We
discuss the lessons learned from this processing campaign and several avenues
for future improvements.
Comment: 13 pages, 9 figures, matches accepted version
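The abstract notes that traffic from several thousand concurrent jobs can overwhelm even high-bandwidth network switches unless jobs are distributed across sites. A back-of-envelope sketch of that scaling, with all workload numbers assumed purely for illustration (none come from the paper):

```python
# Estimate the aggregate bandwidth that many concurrent grid jobs place on a
# shared inference server. All numbers below are illustrative assumptions.

def aggregate_bandwidth_gbps(n_jobs: int,
                             mb_per_request: float,
                             requests_per_sec_per_job: float) -> float:
    """Total traffic toward the inference server in gigabits per second."""
    mb_per_sec = n_jobs * mb_per_request * requests_per_sec_per_job
    return mb_per_sec * 8 / 1000.0  # MB/s -> Gb/s

# Assumed workload: 4000 jobs, 5 MB of input per inference request,
# two requests per second per job.
demand = aggregate_bandwidth_gbps(4000, 5.0, 2.0)
print(f"estimated demand: {demand:.0f} Gb/s")

# Even a 100 Gb/s uplink would be saturated at this (assumed) rate, which is
# why the jobs may need to be spread across multiple network facilities.
print("exceeds a 100 Gb/s uplink:", demand > 100)
```

Varying the per-request payload and request rate in this sketch shows how quickly the demand crosses typical switch capacities.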
Darshan for HEP applications
Modern HEP workflows must manage increasingly large and complex data collections. HPC facilities may be employed to help meet these workflows’ growing data processing needs. However, a better understanding of the I/O patterns and underlying bottlenecks of these workflows is necessary to meet the performance expectations of HPC systems.
Darshan is a lightweight I/O characterization tool that captures concise views of HPC application I/O behavior. It intercepts application I/O calls at runtime, records file access statistics for each process, and generates log files detailing application I/O access patterns.
Typical HEP workflows include event generation, detector simulation, event reconstruction, and subsequent analysis stages. A Darshan-based study of the I/O behavior of the ATLAS simulation and filtering stage and of the CMS simulation workflow is presented, including insights into the I/O operations and data access sizes.
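Darshan instruments application I/O at runtime and records per-process, per-file access statistics. The core idea of intercepting I/O calls and accumulating counters can be sketched in a few lines of Python; this proxy is an illustration only, not Darshan's actual (C-level, MPI-aware) implementation:

```python
# Wrap a file-like object, count every read/write that passes through, and
# keep per-file statistics -- a toy version of I/O characterization.
import io
from collections import defaultdict

io_stats = defaultdict(lambda: {"reads": 0, "bytes_read": 0,
                                "writes": 0, "bytes_written": 0})

class TracedFile:
    """Proxy that records access statistics for each read/write call."""
    def __init__(self, f, name):
        self._f, self._name = f, name

    def read(self, n=-1):
        data = self._f.read(n)
        io_stats[self._name]["reads"] += 1
        io_stats[self._name]["bytes_read"] += len(data)
        return data

    def write(self, data):
        n = self._f.write(data)
        io_stats[self._name]["writes"] += 1
        io_stats[self._name]["bytes_written"] += n
        return n

    def seek(self, *args):
        return self._f.seek(*args)

# Toy "workflow stage": write two event blocks, then read them back.
f = TracedFile(io.BytesIO(), "events.dat")
f.write(b"x" * 1024)
f.write(b"y" * 2048)
f.seek(0)
f.read()
print(io_stats["events.dat"])
```

A report over `io_stats` at the end of a run gives the kind of per-file access-pattern summary that Darshan logs provide.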
The Gravity Collective: A Search for the Electromagnetic Counterpart to the Neutron Star-Black Hole Merger GW190814
We present optical follow-up imaging obtained with the Katzman Automatic
Imaging Telescope, Las Cumbres Observatory Global Telescope Network, Nickel
Telescope, Swope Telescope, and Thacher Telescope of the LIGO/Virgo
gravitational wave (GW) signal from the neutron star-black hole (NSBH) merger
GW190814. We searched the GW190814 localization region (19 deg² for the 90th percentile best localization), covering a total of 51 deg² and 94.6%
of the two-dimensional localization region. Analyzing the properties of 189
transients that we consider as candidate counterparts to the NSBH merger,
including their localizations, discovery times from merger, optical spectra,
likely host-galaxy redshifts, and photometric evolution, we conclude that none
of these objects are likely to be associated with GW190814. Based on this
finding, we consider the likely optical properties of an electromagnetic
counterpart to GW190814, including possible kilonovae and short gamma-ray burst
afterglows. Using the joint limits from our follow-up imaging, we conclude that
a counterpart with an -band decline rate of 0.68 mag day⁻¹, similar to the kilonova AT 2017gfo, could peak at an absolute magnitude of at most mag (50% confidence). Our data are not constraining for "red" kilonovae and rule out "blue" kilonovae with (30% confidence). We strongly rule out all known types of short gamma-ray burst afterglows with viewing angles 17° assuming an initial jet opening angle of and explosion energies and circumburst densities similar to
afterglows explored in the literature. Finally, we explore the possibility that
GW190814 merged in the disk of an active galactic nucleus, of which we find
four in the localization region, but we do not find any candidate counterparts
among these sources.
Comment: 86 pages, 9 figures
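The quoted decline rate ties a kilonova's peak absolute magnitude to what follow-up imaging could still detect days after merger, via the standard distance modulus. A minimal sketch of that arithmetic; the peak magnitude of −17.5 and the ~240 Mpc distance used here are assumptions for illustration, not values from this abstract:

```python
import math

def apparent_mag(abs_mag: float, distance_mpc: float) -> float:
    """Distance modulus: m = M + 5 * log10(d / 10 pc)."""
    return abs_mag + 5 * math.log10(distance_mpc * 1e6 / 10)

def mag_after(peak_mag: float, decline_mag_per_day: float, days: float) -> float:
    """A fading transient's magnitude grows (gets fainter) linearly in time."""
    return peak_mag + decline_mag_per_day * days

# Assumed example: a kilonova peaking at M = -17.5, seen at 240 Mpc,
# fading at the 0.68 mag/day rate quoted in the abstract.
peak_apparent = apparent_mag(-17.5, 240.0)
print(f"apparent magnitude at peak: {peak_apparent:.1f}")
print(f"three days after peak:      {mag_after(peak_apparent, 0.68, 3.0):.1f}")
```

Under these assumed numbers the counterpart fades by roughly two magnitudes in three days, which is why rapid follow-up imaging matters for such searches.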
DUNE Production processing and workflow management software evaluation
The Deep Underground Neutrino Experiment (DUNE) will be the world’s foremost neutrino detector when it begins taking data in the mid-2020s. Two prototype detectors, collectively known as ProtoDUNE, have begun taking data at CERN and have accumulated over 3 PB of raw and reconstructed data since September 2018. Particle interactions within liquid argon time projection chambers are challenging to reconstruct, and the collaboration has set up a dedicated Production Processing group to perform centralized reconstruction of the large ProtoDUNE datasets as well as to generate large-scale Monte Carlo simulation. Part of the production infrastructure includes workflow management software and monitoring tools that are necessary to efficiently submit and monitor the large and diverse set of jobs needed to meet the experiment’s goals. We will give a brief overview of DUNE and ProtoDUNE, describe the various types of jobs within the Production Processing group’s purview, and discuss the software and workflow management strategies currently in place to meet existing demand. We will conclude with a description of our requirements in a workflow management software solution and our planned evaluation process.
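Submitting and monitoring a large, diverse set of jobs is largely a bookkeeping problem. A minimal sketch of the kind of per-type, per-state summary a workflow monitoring tool provides; the job types and states here are illustrative, not DUNE's actual schema:

```python
# Toy job-tracking summary: count jobs per (type, state) pair, as a
# production monitoring dashboard would. Types/states are made up.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Job:
    job_type: str   # e.g. "reco" (reconstruction), "mc-gen" (Monte Carlo)
    state: str      # e.g. "idle", "running", "done", "failed"

def summarize(jobs):
    """Tally jobs by (type, state) so failures per campaign stand out."""
    return Counter((j.job_type, j.state) for j in jobs)

jobs = [Job("reco", "done"), Job("reco", "failed"), Job("mc-gen", "running")]
print(summarize(jobs))
```

Real workflow management systems layer retries, priorities, and site selection on top of exactly this kind of state accounting.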