Application of the WEPP model with digital geographic information
The Water Erosion Prediction Project (WEPP) is a process-based continuous simulation
erosion model that can be applied to hillslope profiles and small watersheds. One
limitation to application of WEPP (or other models) to the field or farm scale is the
difficulty in determining the watershed structure, which may be composed of multiple
channels and profiles (and potentially other features as well). This presentation describes current efforts to link the WEPP model with Geographic Information Systems (GIS) and utilize Digital Elevation Model (DEM) data to generate the necessary topographic inputs for erosion model simulations. Two automated approaches for applying the WEPP model
have been developed and compared to manual application of the model. The first approach (named the Hillslope method) uses information from a DEM to delineate the watershed boundary, channel and hillslope locations, and then configure "representative"
hillslope slope profiles from the myriad flowpath data. The second approach (named the
Flowpath method) also uses DEM information to delineate the watershed boundary, but
then runs WEPP model simulations on every flowpath within a watershed. For a set of research watersheds, the automatic Hillslope method performed as well as a manual
application of WEPP by an expert user in predictions of runoff and sediment loss. Tests
also showed that the Hillslope and Flowpath methods were not significantly different
than each other or different from manual model applications in predictions of hillslope erosion. Additional research work ongoing at the National Soil Erosion Research Laboratory is examining the feasibility of using commonly available digital elevation
data (for example, from on-vehicle Global Positioning System (GPS) units) to provide input to the automated techniques for driving the erosion model.
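Both automated approaches hinge on extracting drainage structure from a DEM grid. As a rough illustration of that step (a minimal sketch, not the WEPP/GIS linkage itself; the grid, elevations, and function names are invented for the example), the widely used D8 rule routes each cell's flow to its steepest-descent neighbor, and chaining those steps yields the kind of flowpaths the Flowpath method simulates:

```python
# Hypothetical sketch of D8 flowpath tracing on a tiny DEM grid.
DEM = [  # elevations (m); the lowest cell (2, 2) acts as the outlet
    [9.0, 8.0, 7.0],
    [8.0, 6.0, 5.0],
    [7.0, 5.0, 3.0],
]

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_downslope(r, c):
    """Return the neighbor with the steepest descent, or None at a pit/outlet."""
    best, best_drop = None, 0.0
    for dr, dc in NEIGHBORS:
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(DEM) and 0 <= cc < len(DEM[0]):
            dist = (dr * dr + dc * dc) ** 0.5   # diagonal steps are longer
            drop = (DEM[r][c] - DEM[rr][cc]) / dist
            if drop > best_drop:
                best, best_drop = (rr, cc), drop
    return best

def flowpath(r, c):
    """Follow steepest descent from (r, c) until no lower neighbor exists."""
    path = [(r, c)]
    nxt = d8_downslope(r, c)
    while nxt is not None:
        path.append(nxt)
        nxt = d8_downslope(*nxt)
    return path

print(flowpath(0, 0))  # → [(0, 0), (1, 1), (2, 2)]
```

Real implementations must also handle pits, flats, and boundary cells; delineating the watershed boundary then amounts to collecting every cell whose flowpath terminates at the chosen outlet.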
Gravitational waves from eccentric compact binaries: Reduction in signal-to-noise ratio due to nonoptimal signal processing
Inspiraling compact binaries have been identified as one of the most
promising sources of gravitational waves for interferometric detectors. Most of
these binaries are expected to have circularized by the time their
gravitational waves enter the instrument's frequency band. However, the
possibility that some of the binaries might still possess a significant
eccentricity is not excluded. We imagine a situation in which eccentric signals
are received by the detector but not explicitly searched for in the data
analysis, which uses exclusively circular waveforms as matched filters. We
ascertain the likelihood that these filters, though not optimal, will
nevertheless be successful at capturing the eccentric signals. We do this by
computing the loss in signal-to-noise ratio incurred when searching for
eccentric signals with those nonoptimal filters. We show that for a binary
system of a given total mass, this loss increases with increasing eccentricity.
We show also that for a given eccentricity, the loss decreases as the total
mass is increased.
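The loss in signal-to-noise ratio is conveniently phrased as a fitting factor: the overlap between the normalized eccentric signal and the best-matching circular template. The following is a toy sketch of that computation (pure Python; white noise is assumed, so the noise-weighted inner product reduces to a plain dot product, and the "eccentric" signal is just a sinusoid with an added harmonic, not a post-Newtonian waveform):

```python
import math

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    n = math.sqrt(inner(a, a))
    return [x / n for x in a]

def fitting_factor(signal, template):
    """Overlap maximized over time shift; fractional SNR loss is 1 - FF."""
    s, t = normalize(signal), normalize(template)
    best = 0.0
    for shift in range(len(t)):
        shifted = t[shift:] + t[:shift]   # circular time shift
        best = max(best, abs(inner(s, shifted)))
    return best

N = 256
circular = [math.sin(2 * math.pi * 8 * k / N) for k in range(N)]
# crude stand-in for an eccentric signal: add a second harmonic
eccentric = [math.sin(2 * math.pi * 8 * k / N)
             + 0.3 * math.sin(2 * math.pi * 16 * k / N) for k in range(N)]

ff = fitting_factor(eccentric, circular)
print(round(1.0 - ff, 3))  # → 0.042 (fractional SNR loss from the nonoptimal filter)
```

The harmonic content a circular template cannot absorb is exactly what drives the loss, which is why the loss grows with eccentricity.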
Safety verification of asynchronous pushdown systems with shaped stacks
In this paper, we study the program-point reachability problem of concurrent
pushdown systems that communicate via unbounded and unordered message buffers.
Our goal is to relax the common restriction that messages can only be retrieved
by a pushdown process when its stack is empty. We use the notion of partially
commutative context-free grammars to describe a new class of asynchronously
communicating pushdown systems with a mild shape constraint on the stacks for
which the program-point coverability problem remains decidable. Stacks that fit
the shape constraint may reach arbitrary heights; further a process may execute
any communication action (be it process creation, message send or retrieval)
whether or not its stack is empty. This class extends previous computational
models studied in the context of asynchronous programs, and enables the safety
verification of a large class of message-passing programs.
Automatic Abstraction in SMT-Based Unbounded Software Model Checking
Software model checkers based on under-approximations and SMT solvers are
very successful at verifying safety (i.e. reachability) properties. They
combine two key ideas -- (a) "concreteness": a counterexample in an
under-approximation is a counterexample in the original program as well, and
(b) "generalization": a proof of safety of an under-approximation, produced by
an SMT solver, is generalizable to a proof of safety of the original program.
In this paper, we present a combination of "automatic abstraction" with the
under-approximation-driven framework. We explore two iterative approaches for
obtaining and refining abstractions -- "proof based" and "counterexample based"
-- and show how they can be combined into a unified algorithm. To the best of
our knowledge, this is the first application of Proof-Based Abstraction,
primarily used to verify hardware, to Software Verification. We have
implemented a prototype of the framework using Z3, and evaluate it on many
benchmarks from the Software Verification Competition. We show experimentally
that our combination is quite effective on hard instances.
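The counterexample-based refinement loop can be caricatured in a few lines (a toy model, not the paper's algorithm and not the Z3 API: the "solver" is brute force and the "program" is a finite set of predicates over a small state space):

```python
# Hypothetical sketch of counterexample-based abstraction refinement.
# Concrete program: states x in 0..7 constrained by predicates.
concrete = [
    lambda x: x % 2 == 0,   # x is even
    lambda x: x < 6,        # x stays below 6
]
bad = lambda x: x == 5      # safety property: x == 5 is unreachable

def find_cex(constraints):
    """Brute-force 'solver': find a state satisfying constraints and bad."""
    for x in range(8):
        if bad(x) and all(c(x) for c in constraints):
            return x
    return None

def check_with_abstraction():
    abstraction = []                    # start from the empty abstraction
    while True:
        cex = find_cex(abstraction)
        if cex is None:
            return "safe", abstraction  # abstract proof, so concrete proof
        if all(c(cex) for c in concrete):
            return "unsafe", cex        # real counterexample
        # spurious: refine with one concrete constraint ruling it out
        for c in concrete:
            if c not in abstraction and not c(cex):
                abstraction.append(c)
                break

print(check_with_abstraction())  # safe, using only the evenness constraint
```

The proof-based variant runs in the other direction, keeping only the constraints an unsatisfiability proof actually used; combining the two is the unification described above.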
Inferring Loop Invariants using Postconditions
One of the obstacles in automatic program proving is to obtain suitable loop
invariants.
The invariant of a loop is a weakened form of its postcondition (the loop's
goal, also known as its contract); the present work takes advantage of this
observation by using the postcondition as the basis for invariant inference,
using various heuristics such as "uncoupling" which prove useful in many
important algorithms.
Thanks to these heuristics, the technique is able to infer invariants for a
large variety of loop examples.
We present the theory behind the technique, its implementation (freely
available for download and currently relying on Microsoft Research's Boogie
tool), and the results obtained.
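A standard instance of this idea: for a maximum-finding loop with postcondition result == max(a[:n]), weakening by replacing the constant n with the loop variable i yields a candidate invariant that holds on entry and is preserved by the body. A small sketch with runtime asserts standing in for static checks of the kind Boogie performs (the code is illustrative, not the paper's tool):

```python
def array_max(a):
    """Find the maximum of a non-empty list, checking the inferred invariant."""
    n = len(a)
    result, i = a[0], 1
    while i < n:
        # invariant: result == max(a[:i]) -- the postcondition with n -> i
        assert result == max(a[:i])
        if a[i] > result:
            result = a[i]
        i += 1
    # on exit, i == n, so the invariant strengthens back to the postcondition
    assert result == max(a[:n])
    return result

print(array_max([3, 1, 4, 1, 5, 9, 2, 6]))  # → 9
```

Heuristics such as "uncoupling" generalize this substitution step when the postcondition mentions several variables that the loop advances independently.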
Analysis of LIGO data for gravitational waves from binary neutron stars
We report on a search for gravitational waves from coalescing compact binary
systems in the Milky Way and the Magellanic Clouds. The analysis uses data
taken by two of the three LIGO interferometers during the first LIGO science
run and illustrates a method of setting upper limits on inspiral event rates
using interferometer data. The analysis pipeline is described with particular
attention to data selection and coincidence between the two interferometers. We
establish an observational upper limit on the binary neutron star coalescence rate of $\mathcal{R} < 1.7 \times 10^{2}$ per year per Milky Way Equivalent Galaxy, for binary systems with component masses between $1$ and $3\,M_\odot$.
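For intuition about where a rate limit of this form comes from, the simplest version is the zero-event Poisson bound: if no events are observed in an effective observation time T, the rate excluded at confidence level CL satisfies exp(−RT) = 1 − CL. A minimal sketch follows (the effective time below is invented purely for illustration; the paper's actual analysis additionally accounts for detection efficiency and background):

```python
import math

def poisson_upper_limit(t_eff_years, cl=0.90):
    """Rate limit R when zero events are observed in time t_eff_years:
    solve exp(-R * T) = 1 - cl for R."""
    return -math.log(1.0 - cl) / t_eff_years

# Illustrative only: roughly five days of effective observation time
print(round(poisson_upper_limit(0.0135), 1))  # → 170.6 events per year
```

With zero events, −ln(0.10) ≈ 2.3, so the 90% limit is always about 2.3 events divided by the effective observation time.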
Volume I. Introduction to DUNE
The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay. These mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. The Deep Underground Neutrino Experiment (DUNE) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. This TDR is intended to justify the technical choices for the far detector that flow down from the high-level physics goals through requirements at all levels of the Project. Volume I contains an executive summary that introduces the DUNE science program, the far detector and the strategy for its modular designs, and the organization and management of the Project. The remainder of Volume I provides more detail on the science program that drives the choice of detector technologies and on the technologies themselves. It also introduces the designs for the DUNE near detector and the DUNE computing model, for which DUNE is planning design reports. Volume II of this TDR describes DUNE's physics program in detail. Volume III describes the technical coordination required for the far detector design, construction, installation, and integration, and its organizational structure. Volume IV describes the single-phase far detector technology. A planned Volume V will describe the dual-phase technology.
Measurement of the inclusive 3-jet production differential cross section in proton-proton collisions at 7 TeV and determination of the strong coupling constant in the TeV range
This paper presents a measurement of the inclusive 3-jet production differential cross section at a proton–proton centre-of-mass energy of 7 TeV using data corresponding to an integrated luminosity of 5 fb^−1 collected with the CMS detector. The analysis is based on the three jets with the highest transverse momenta. The cross section is measured as a function of the invariant mass of the three jets in a range of 445–3270 GeV and in two bins of the maximum rapidity of the jets up to a value of 2. A comparison between the measurement and the prediction from perturbative QCD at next-to-leading order is performed. Within uncertainties, data and theory are in agreement. The sensitivity of the observable to the strong coupling constant αS is studied. A fit to all data points with 3-jet masses larger than 664 GeV gives a value of the strong coupling constant of αS(MZ) = 0.1171 ± 0.0013 (exp) +0.0073/−0.0047 (theo).
Measurement of the differential cross section for top quark pair production in pp collisions at √s = 8 TeV
The normalized differential cross section for top quark pair (tt̄) production is measured in pp collisions at a centre-of-mass energy of 8 TeV at the CERN LHC using the CMS detector in data corresponding to an integrated luminosity of 19.7 fb^−1. The measurements are performed in the lepton+jets (e/μ+jets) and in the dilepton (e+e−, μ+μ−, and e±μ∓) decay channels. The tt̄ cross section is measured as a function of the kinematic properties of the charged leptons, the jets associated to b quarks, the top quarks, and the tt̄ system. The data are compared with several predictions from perturbative quantum chromodynamics up to approximate next-to-next-to-leading-order precision. No significant deviations are observed relative to the standard model predictions. © 2015, CERN for the benefit of the CMS collaboration.