Application of temporal streamflow descriptors in hydrologic model parameter estimation
This paper presents a parameter estimation approach based on hydrograph descriptors that capture dominant streamflow characteristics at three timescales (monthly, yearly, and record extent). The scheme, titled hydrograph descriptors multitemporal sensitivity analyses (HYDMUS), yields an ensemble of model simulations generated from a reduced parameter space, based on a set of streamflow descriptors that emphasize the timescale dynamics of the streamflow record. In this procedure, the posterior distributions of model parameters derived at coarser timescales are used to sample model parameters for the next finer timescale. The procedure was used to estimate the parameters of the Sacramento soil moisture accounting model (SAC-SMA) for the Leaf River, Mississippi. The results indicated that, in addition to a significant reduction in the range of parameter uncertainty, HYDMUS improved parameter identifiability for all 13 of the model parameters. The performance of the procedure was compared to four previous calibration studies on the same watershed. Although our application of HYDMUS did not explicitly consider the error at each simulation time step during the calibration process, the model performance was, in some important respects, found to be better than in previous deterministic studies. Copyright 2005 by the American Geophysical Union
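The coarse-to-fine sampling idea can be sketched in a few lines. This is a toy illustration, not SAC-SMA: `toy_model`, the two descriptors, and all constants below are invented, and only two parameters are used. The point is that the posterior retained at the coarse timescale bounds the sampling region for the finer one, shrinking parameter uncertainty stage by stage.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(params, t):
    # Hypothetical stand-in for a rainfall-runoff model (e.g. SAC-SMA):
    # two parameters control amplitude and decay of a synthetic hydrograph.
    a, k = params
    return a * np.exp(-k * t)

def coarse_descriptor(flow):
    # A simple record-extent streamflow descriptor: mean flow.
    return flow.mean()

def fine_descriptor(flow):
    # A finer-timescale descriptor: mean flow over the early record.
    return flow[:50].mean()

t = np.linspace(0.0, 10.0, 200)
observed = toy_model((2.0, 0.3), t) + rng.normal(0.0, 0.05, t.size)

# Stage 1 (coarse timescale): sample from a broad prior and keep the
# best-matching fraction as an approximate posterior.
samples = rng.uniform([0.1, 0.01], [5.0, 1.0], size=(5000, 2))
errors = np.array([abs(coarse_descriptor(toy_model(p, t))
                       - coarse_descriptor(observed)) for p in samples])
posterior = samples[errors.argsort()[:500]]

# Stage 2 (finer timescale): resample only from the retained region,
# scoring against the finer descriptor.
lo, hi = posterior.min(axis=0), posterior.max(axis=0)
samples2 = rng.uniform(lo, hi, size=(5000, 2))
errors2 = np.array([abs(fine_descriptor(toy_model(p, t))
                        - fine_descriptor(observed)) for p in samples2])
posterior2 = samples2[errors2.argsort()[:500]]

print("stage-1 retained range:", hi - lo)
print("stage-2 retained range:", posterior2.max(axis=0) - posterior2.min(axis=0))
```

By construction, the stage-2 samples live inside the stage-1 posterior bounds, so the retained parameter range can only shrink from one timescale to the next.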
Ranking and Selection under Input Uncertainty: Fixed Confidence and Fixed Budget
In stochastic simulation, input uncertainty (IU) is caused by the error in
estimating the input distributions using finite real-world data. When it comes
to simulation-based Ranking and Selection (R&S), ignoring IU could lead to the
failure of many existing selection procedures. In this paper, we study R&S
under IU by allowing the possibility of acquiring additional data. Two
classical R&S formulations are extended to account for IU: (i) for fixed
confidence, we consider when data arrive sequentially so that IU can be reduced
over time; (ii) for fixed budget, a joint budget is assumed to be available for
both collecting input data and running simulations. New procedures are proposed
for each formulation using the frameworks of Sequential Elimination and Optimal
Computing Budget Allocation, with theoretical guarantees provided accordingly
(e.g., upper bound on the expected running time and finite-sample bound on the
probability of false selection). Numerical results demonstrate the
effectiveness of our procedures through a multi-stage production-inventory
problem.
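As a rough illustration of the fixed-confidence side, here is a minimal sequential-elimination loop under a known, fixed input distribution (so input uncertainty is set aside). The alternatives, their means, and the confidence radius are ad hoc choices made for the sketch, not the paper's procedure or bound.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical alternatives; the procedure must find the largest mean.
true_means = np.array([1.0, 1.2, 0.8, 2.5])

def simulate(i):
    # One noisy simulation replication of alternative i. In the paper's
    # setting the input distribution is itself estimated from data; here
    # it is fixed for illustration.
    return true_means[i] + rng.normal(0.0, 1.0)

active = list(range(len(true_means)))
n = np.zeros(len(true_means), dtype=int)
s = np.zeros(len(true_means))

# Sequential elimination: sample all surviving alternatives each round
# and drop any whose upper confidence bound falls below the best lower
# confidence bound.
for round_ in range(1, 2001):
    for i in active:
        s[i] += simulate(i)
        n[i] += 1
    mean = s[active] / n[active]
    # An ad hoc confidence radius for the sketch (not a rigorous bound).
    radius = np.sqrt(2.0 * np.log(1.0 + round_**2) / n[active])
    best_lower = (mean - radius).max()
    active = [i for k, i in enumerate(active)
              if mean[k] + radius[k] >= best_lower]
    if len(active) == 1:
        break

print("selected alternative:", active[0])
```

The surviving alternative is the one whose confidence interval could not be separated from the best; the loop's budget use (rounds until termination) is exactly the quantity the paper's running-time bounds control.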
Flux and Photon Spectral Index Distributions of Fermi-LAT Blazars And Contribution To The Extragalactic Gamma-ray Background
We present a determination of the distributions of photon spectral index and
gamma-ray flux - the so-called LogN-LogS relation - for the 352 blazars
detected above an approximately seven-sigma significance threshold and
located at Galactic latitudes |b| > 20 degrees by the Large Area Telescope of
the Fermi Gamma-ray Space Telescope in its first year catalog. Because the flux
detection threshold depends on the photon index, the observed raw distributions
do not provide the true LogN-LogS counts or the true distribution of the photon
index. We use the non-parametric methods developed by Efron and Petrosian to
reconstruct the intrinsic distributions from the observed ones; these methods
account for the data truncations introduced by observational bias and include the
effects of the possible correlation between the two variables. We demonstrate
the robustness of our procedures using a simulated data set of blazars and then
apply these to the real data and find that for the population as a whole the
intrinsic flux distribution can be represented by a broken power law with high
and low indexes of -2.37 +/- 0.13 and -1.70 +/- 0.26, respectively, and the
intrinsic photon index distribution can be represented by a Gaussian with mean
of 2.41 +/- 0.13 and width of 0.25 +/- 0.03. We also find the intrinsic
distributions for the sub-populations of BL Lac and FSRQs type blazars
separately. We then calculate the contribution of Fermi blazars to the diffuse
extragalactic gamma-ray background radiation. Under the assumption that the
flux distribution of blazars continues to arbitrarily low fluxes, we calculate
the best fit contribution of all blazars to the total extragalactic gamma-ray
output to be 60%, with a large uncertainty.
Comment: 13 pages, 13 figures, 2 tables; updated to published version with additional figure
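The quoted broken power law can be written down directly. Only the two slopes (-2.37 and -1.70) come from the abstract; the break flux `S_b` and the normalization below are invented placeholders chosen to show the functional form. The last lines illustrate why a faint-end slope shallower than -2 makes the total flux converge, so a finite blazar contribution to the background can be quoted.

```python
import numpy as np

S_b = 1e-8            # assumed break flux (photons cm^-2 s^-1), made up
idx_hi, idx_lo = -2.37, -1.70   # bright-end and faint-end slopes (abstract)

def dn_ds(S, norm=1.0):
    """Differential counts dN/dS, continuous at the break flux."""
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b,
                    norm * (S / S_b) ** idx_hi,
                    norm * (S / S_b) ** idx_lo)

# Flux contributed per logarithmic flux interval is ~ S^2 dN/dS: at the
# faint end this scales as S^(2 - 1.70) = S^0.3, which vanishes as S -> 0,
# so extrapolating to arbitrarily low fluxes still gives a finite total.
S = np.logspace(-11, -6, 6)
print("S^2 dN/dS:", S**2 * dn_ds(S))
```

The Gaussian photon-index distribution (mean 2.41, width 0.25) would enter a fuller treatment through the index-dependent detection threshold, which is the truncation the Efron-Petrosian method corrects for.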
SFI++ I: A New I-band Tully-Fisher Template, the Cluster Peculiar Velocity Dispersion and H0
The SFI++ consists of ~5000 spiral galaxies which have measurements suitable
for the application of the I-band Tully-Fisher (TF) relation. This sample
builds on the SCI and SFI samples published in the 1990s but includes
significant amounts of new data as well as improved methods for parameter
determination. We derive a new I-band TF relation from a subset of this sample
which consists of 807 galaxies in the fields of 31 nearby clusters and groups.
This sample constitutes the largest ever available for the calibration of the
TF template and extends the range of line-widths over which the template is
reliably measured. Careful accounting is made of observational and sample
biases such as incompleteness, finite cluster size, galaxy morphology and
environment. We find evidence for a type-dependent TF slope which is shallower
for early type than for late type spirals. The line-of-sight cluster peculiar
velocity dispersion is measured for the sample of 31 clusters. This value is
directly related to the spectrum of initial density fluctuations and thus
provides an independent verification of the best fit WMAP cosmology and an
estimate of Omega^0.6 sigma_8 = 0.52+/-0.06. We also provide an independent
measure of the TF zeropoint using 17 galaxies in the SFI++ sample for which
Cepheid distances are available. In combination with the "basket of clusters"
template relation these calibrator galaxies provide a measure of H0 = 74+/-2
(random) +/-6 (systematic) km/s/Mpc.
Comment: Accepted by ApJ (scheduled for 20 Dec 2006, issue 653). 21 pages (2-column emulateapj) including 12 figures. Version 2 corrects typos and other small errors noticed in proof
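The H0 chain in the last sentence (Cepheid-calibrated TF zeropoint, then distances, then H0) can be illustrated with a mock galaxy. The slope and zeropoint below are placeholder values in the general neighborhood of I-band TF fits, not the SFI++ template, and the galaxy's magnitude, line width, and recession velocity are invented.

```python
# Hypothetical I-band Tully-Fisher relation, pivoted at log W = 2.5:
#   M_I = zeropoint + slope * (log10(W) - 2.5)
# (placeholder coefficients, not the SFI++ template fit)
slope, zeropoint = -7.85, -20.85

def tf_absolute_mag(log_width):
    return zeropoint + slope * (log_width - 2.5)

def distance_mpc(apparent_mag, log_width):
    # Distance modulus: m - M = 5 log10(d / 10 pc)
    mu = apparent_mag - tf_absolute_mag(log_width)
    return 10.0 ** (mu / 5.0 - 5.0)

# Mock galaxy: log line width 2.6, apparent I-band mag 12.6,
# recession velocity 5200 km/s.
d = distance_mpc(12.6, 2.6)
H0 = 5200.0 / d
print(f"distance = {d:.1f} Mpc, H0 = {H0:.1f} km/s/Mpc")
```

In the paper this chain is run in aggregate: the Cepheid galaxies pin the zeropoint, the cluster template supplies the slope, and many galaxies' distances and velocities are combined into the quoted H0 = 74 +/- 2 (random) +/- 6 (systematic).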
Haplotype reconstruction error as a classical misclassification problem
Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. Our aim was to quantify the haplotype reconstruction error and to provide tools for assessing it.
In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R^2, and introduced sensitivity and specificity into this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity was slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with an increasing number of loci, increasing minor allele frequency of the SNPs, decreasing correlation between the alleles, and increasing ambiguity.
We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method indicates whether a specific risk haplotype can be expected to be reconstructed with essentially no misclassification or with high misclassification, and thus the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity capture two separate dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
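Haplotype-specific sensitivity and specificity can be read directly off a misclassification matrix (rows: true haplotype; columns: reconstructed haplotype). The counts below are made up for illustration; the definitions follow the usual diagnostic-test convention applied per haplotype, which is the sense in which the two measures describe the matrix completely.

```python
import numpy as np

labels = ["h1", "h2", "h3"]       # hypothetical haplotypes
counts = np.array([
    [95,  3,  2],   # truly h1
    [ 4, 88,  8],   # truly h2
    [ 1,  6, 43],   # truly h3
])

def sensitivity(k):
    # P(reconstructed as haplotype k | truly haplotype k)
    return counts[k, k] / counts[k].sum()

def specificity(k):
    # P(not reconstructed as haplotype k | truly not haplotype k)
    not_k = np.delete(np.arange(len(labels)), k)
    off = counts[not_k]
    return (off.sum() - off[:, k].sum()) / off.sum()

for k, name in enumerate(labels):
    print(f"{name}: sens={sensitivity(k):.3f} spec={specificity(k):.3f}")
```

A rare haplotype with low sensitivity but high specificity is often missed when present yet rarely invented when absent; that asymmetry is exactly what a single overall error rate hides, and what misclassification-adjustment methods need as input.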
Computation-Communication Trade-offs and Sensor Selection in Real-time Estimation for Processing Networks
Recent advances in electronics are enabling substantial processing to be
performed at each node (robots, sensors) of a networked system. Local
processing enables data compression and may mitigate measurement noise, but it
is still slower than a central computer (it entails a larger
computational delay). However, while nodes can process the data in parallel,
the centralized computation is sequential in nature. On the other hand, if a
node sends raw data to a central computer for processing, it incurs
communication delay. This leads to a fundamental communication-computation
trade-off, where each node has to decide on the optimal amount of preprocessing
in order to maximize the network performance. We consider a network in charge
of estimating the state of a dynamical system and provide three contributions.
First, we provide a rigorous problem formulation for optimal real-time
estimation in processing networks in the presence of delays. Second, we show
that, in the case of a homogeneous network (where all sensors have the same
computation) that monitors a continuous-time scalar linear system, the optimal
amount of local preprocessing maximizing the network estimation performance can
be computed analytically. Third, we consider the realistic case of a
heterogeneous network monitoring a discrete-time multi-variate linear system
and provide algorithms to decide on suitable preprocessing at each node, and to
select a sensor subset when computational constraints make using all sensors
suboptimal. Numerical simulations show that selecting the sensors is crucial.
Moreover, we show that if the nodes apply the preprocessing policy suggested by
our algorithms, they can substantially improve the network estimation performance.
Comment: 15 pages, 16 figures. Accepted journal version
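The trade-off can be caricatured with a delay model in which local preprocessing runs in parallel across nodes while the central computer must process raw data sequentially. All rates and sizes below are invented; the point is only that the optimal preprocessing fraction flips as the number of nodes grows, mirroring the paper's observation that more nodes make the sequential central stage the bottleneck.

```python
import numpy as np

raw_bits = 1e6        # raw data per measurement per node (assumed)
compress = 0.05       # preprocessing shrinks data to 5% of raw (assumed)
link_rate = 1e6       # bits/s per node to the central computer (assumed)
local_rate = 2e5      # node processing rate; slower than central (assumed)
central_rate = 2e6    # central processing rate, but sequential over nodes

def total_delay(f, n_nodes):
    """End-to-end delay when each node preprocesses a fraction f locally."""
    local = f * raw_bits / local_rate                 # parallel across nodes
    comm = (f * compress + (1.0 - f)) * raw_bits / link_rate
    central = n_nodes * (1.0 - f) * raw_bits / central_rate  # sequential
    return local + comm + central

fs = np.linspace(0.0, 1.0, 101)
best_f = {}
for n in (2, 50):
    delays = [total_delay(f, n) for f in fs]
    best_f[n] = float(fs[int(np.argmin(delays))])
print(best_f)  # → {2: 0.0, 50: 1.0}
```

With few nodes, shipping raw data is fastest despite the communication delay; with many nodes, the sequential central stage dominates and full local preprocessing wins. The paper's heterogeneous, multi-sensor setting makes this decision per node and couples it with sensor selection.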