Probabilities of spurious connections in gene networks: Application to expression time series
Motivation: The reconstruction of gene networks from gene expression
microarrays is gaining popularity as methods improve and as more data become
available. The reliability of such networks could be judged by the probability
that a connection between genes is spurious, resulting from chance fluctuations
rather than from a true biological relationship. Results: Unlike the false
discovery rate and positive false discovery rate, the decisive false discovery
rate (dFDR) is exactly equal to a conditional probability without assuming
independence or the randomness of hypothesis truth values. This property is
useful not only in the common application to the detection of differential gene
expression, but also in determining the probability of a spurious connection in
a reconstructed gene network. Estimators of the dFDR can estimate each of three
probabilities: 1. The probability that two genes that appear to be associated
with each other lack such association. 2. The probability that a time ordering
observed for two associated genes is misleading. 3. The probability that a time
ordering observed for two genes is misleading, either because they are not
associated or because they are associated without a lag in time. The first
probability applies to both static and dynamic gene networks, and the other two
only apply to dynamic gene networks. Availability: Cross-platform software for
network reconstruction, probability estimation, and plotting is free from
http://www.davidbickel.com as R functions and a Java application.
Comment: Like q-bio.GN/0404032, this was rejected in March 2004 because it was
submitted to the math archive. The only modification is a corrected reference
to q-bio.GN/0404032, which was not modified at all.
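The first of the three probabilities above can be illustrated with a generic permutation scheme. The sketch below is not the paper's dFDR estimator (the authors' software is available from the URL above); it merely shows the underlying idea of estimating the expected fraction of spurious correlation edges by permuting each gene's time series. All names and the correlation threshold are illustrative assumptions:

```python
import numpy as np

def spurious_edge_rate(expr, threshold, n_perm=100, seed=0):
    """Expected fraction of correlation edges (|corr| > threshold) that
    are spurious, estimated by permuting each gene's time series.
    A generic FDR-style sketch, NOT the paper's dFDR estimator."""
    rng = np.random.default_rng(seed)
    n_genes = expr.shape[0]
    iu = np.triu_indices(n_genes, k=1)          # upper-triangle gene pairs

    def n_edges(mat):
        return int(np.sum(np.abs(np.corrcoef(mat)[iu]) > threshold))

    observed = n_edges(expr)
    # Permuting each row destroys real associations but keeps marginals.
    null_mean = np.mean([
        n_edges(np.array([rng.permutation(row) for row in expr]))
        for _ in range(n_perm)
    ])
    return min(1.0, null_mean / max(observed, 1))
```

A value near 1 says essentially all detected edges are explainable by chance fluctuations; a value near 0 says few are.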
ICE Second Halley radial: TDA mission support and DSN operations
The article documents the operations encompassing the International Cometary Explorer (ICE) second Halley radial experiment centered around March 28, 1986. The support was provided by the Deep Space Network (DSN) 64-meter subnetwork. Near-continuous support was provided during the last two weeks of March and the first two weeks of April to ensure the collection of adequate background data for the Halley radial experiment. During the last week of March, plasma wave measurements indicated that ICE was within the Halley heavy-ion pick-up region.
A nonparametric empirical Bayes framework for large-scale multiple testing
We propose a flexible and identifiable version of the two-groups model,
motivated by hierarchical Bayes considerations, that features an empirical null
and a semiparametric mixture model for the non-null cases. We use a
computationally efficient predictive recursion marginal likelihood procedure to
estimate the model parameters, even the nonparametric mixing distribution. This
leads to a nonparametric empirical Bayes testing procedure, which we call
PRtest, based on thresholding the estimated local false discovery rates.
Simulations and real-data examples demonstrate that, compared to existing
approaches, PRtest's careful handling of the non-null density can give a much
better fit in the tails of the mixture distribution which, in turn, can lead to
more realistic conclusions.
Comment: 18 pages, 4 figures, 3 tables
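The thresholding step that PRtest is built on can be shown with a toy two-groups calculation. The sketch below uses a theoretical N(0,1) null and a plain kernel density estimate of the mixture density, whereas the paper fits an empirical null and a semiparametric mixture via predictive recursion; the function name and the pi0 value are illustrative assumptions:

```python
import numpy as np

def local_fdr(z, pi0=0.9):
    """Toy two-groups local false discovery rate fdr(z) = pi0*f0(z)/f(z).
    Uses a theoretical N(0,1) null and a plain Gaussian kernel density
    estimate of the mixture f; the paper instead fits an empirical null
    and a semiparametric non-null mixture by predictive recursion."""
    z = np.asarray(z, dtype=float)
    h = 1.06 * z.std() * z.size ** (-0.2)       # Silverman bandwidth
    diffs = (z[:, None] - z[None, :]) / h
    f = np.exp(-0.5 * diffs ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    f0 = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return np.minimum(1.0, pi0 * f0 / f)

# PRtest-style decision: flag cases whose estimated local fdr falls
# below a chosen threshold, e.g. local_fdr(z) < 0.2.
```

The quality of the fit in the tails of f is exactly where, per the abstract, the careful non-null handling pays off.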
Frequentist and Bayesian measures of confidence via multiscale bootstrap for testing three regions
A new computation method of frequentist p-values and Bayesian posterior
probabilities based on the bootstrap probability is discussed for the
multivariate normal model with unknown expectation parameter vector. The null
hypothesis is represented as an arbitrary-shaped region. We introduce new
parametric models for the scaling-law of bootstrap probability so that the
multiscale bootstrap method, which was designed for one-sided tests, can also
compute confidence measures of two-sided tests, extending applicability to a
wider class of hypotheses. Parameter estimation is improved by the two-step
multiscale bootstrap and also by including higher-order terms. Model selection
is important not only as a motivating application of our method, but also as an
essential ingredient in the method. A compromise between the frequentist and
Bayesian approaches is attempted by showing that the Bayesian posterior
probability with a noninformative prior can be interpreted as a frequentist
p-value of a ``zero-sided'' test.
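The multiscale idea admits a minimal illustration for the simplest possible region, a one-dimensional mean with H: mu <= 0. The sketch resamples at sizes n' = n/sigma^2, converts each bootstrap probability to a normalized z-value, fits the classic linear scaling law, and extrapolates. It deliberately uses the old linear model rather than the new parametric scaling models or two-step estimation proposed in the paper; all names and defaults are assumptions:

```python
import numpy as np
from statistics import NormalDist

def multiscale_p(x, n_boot=2000, scales=(0.6, 0.8, 1.0, 1.4, 2.0), seed=0):
    """Toy multiscale bootstrap p-value for H: mean(x) <= 0.
    Resamples at sizes n' = n/sigma^2, converts each bootstrap
    probability BP to psi = sigma * Phi^{-1}(1 - BP), fits the classic
    law psi = b0 + b1*sigma^2, and extrapolates to the corrected
    p-value 1 - Phi(b0 - b1). Illustrative only; the paper develops
    richer scaling models and two-step estimation."""
    nd = NormalDist()
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.size
    psi, s2 = [], []
    for s in scales:
        m = max(2, round(n / s ** 2))             # scaled resample size
        means = rng.choice(x, size=(n_boot, m)).mean(axis=1)
        bp = np.clip((means <= 0).mean(), 1 / n_boot, 1 - 1 / n_boot)
        sigma = np.sqrt(n / m)                    # realized scale
        psi.append(sigma * nd.inv_cdf(1 - bp))
        s2.append(sigma ** 2)
    A = np.column_stack([np.ones(len(s2)), s2])
    b0, b1 = np.linalg.lstsq(A, np.array(psi), rcond=None)[0]
    return 1 - nd.cdf(b0 - b1)
```

For a flat boundary b1 is near zero and the extrapolation reduces to the ordinary normal-theory p-value; curvature of the region shows up through b1.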
Energy-Aware Cloud Management through Progressive SLA Specification
Novel energy-aware cloud management methods dynamically reallocate
computation across geographically distributed data centers to leverage regional
electricity price and temperature differences. As a result, a managed VM may
suffer occasional downtimes. Current cloud providers only offer
high-availability VMs, without enough flexibility to apply such energy-aware
management. In this paper we show how to analyse past traces of dynamic cloud
management actions based on electricity prices and temperatures to estimate VM
availability and price values. We propose a novel SLA specification approach
for offering VMs with different availability and price values guaranteed over
multiple SLAs to enable flexible energy-aware cloud management. We determine
the optimal number of such SLAs as well as their availability and price
guaranteed values. We evaluate our approach in a user SLA selection simulation
using Wikipedia and Grid'5000 workloads. The results show higher customer
conversion and 39% average energy savings per VM.
Comment: 14 pages, conference
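Estimating an availability value from past management traces, as described above, reduces at its core to an uptime fraction. The helper below assumes a hypothetical trace format of (start, end) downtime intervals in hours; in the paper these intervals come from electricity-price- and temperature-driven reallocation logs:

```python
def vm_availability(downtimes, period_hours):
    """Fraction of the observation period a VM was up, given a list of
    (start_h, end_h) downtime intervals extracted from management
    traces. Hypothetical trace format, for illustration only."""
    down = sum(end - start for start, end in downtimes)
    return 1.0 - down / period_hours
```

Grouping VMs by such estimated availabilities, and attaching a price to each group, is what yields the multiple SLA classes the abstract proposes.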
Research on nonlinear optical materials: an assessment. IV. Photorefractive and liquid crystal materials
This panel considered two separate subject areas: photorefractive materials used for nonlinear optics and liquid crystal materials used in light valves. Two related subjects were not considered due to lack of expertise on the panel: photorefractive materials used in light valves and liquid crystal materials used in nonlinear optics. Although the inclusion of a discussion of light valves by a panel on nonlinear optical materials may at first seem odd, it is logical because light valves and photorefractive materials perform common functions.
Lattice QCD study of a five-quark hadronic molecule
We compute the ground-state energies of a heavy-light K-Lambda like system as
a function of the relative distance r of the hadrons. The heavy quarks, one in
each hadron, are treated as static. Then, the energies give rise to an
adiabatic potential Va(r) which we use to study the structure of the five-quark
system. The simulation is based on an anisotropic and asymmetric lattice with
Wilson fermions. Energies are extracted from spectral density functions
obtained with the maximum entropy method. Our results are meant to give
qualitative insight: Using the resulting adiabatic potential in a Schroedinger
equation produces bound state wave functions which indicate that the ground
state of the five-quark system resembles a hadronic molecule, whereas the first
excited state, having a very small rms radius, is probably better described as
a five-quark cluster, or a pentaquark. We hypothesize that an all light-quark
pentaquark may not exist, but in the heavy-quark sector it might, albeit only
as an excited state.
Comment: 11 pages, 15 figures, 4 tables
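The final step described above, feeding an adiabatic potential into a Schroedinger equation and looking for bound states, can be sketched with a finite-difference solver. The potential is passed in by the caller; the lattice-extracted Va(r) itself is not reproduced here, so the test potential in any usage is only a stand-in:

```python
import numpy as np

def bound_states(V, r_max=10.0, n=800, mass=0.5):
    """Bound-state energies of the radial Schroedinger equation
    -u''/(2*mass) + V(r)*u = E*u with u(0) = u(r_max) = 0, solved by
    3-point finite differences on a uniform grid. 'mass' is the
    reduced mass; V is any callable potential, e.g. an adiabatic
    Va(r) tabulated from a lattice simulation."""
    r = np.linspace(r_max / n, r_max, n)
    h = r[1] - r[0]
    diag = 1.0 / (mass * h ** 2) + V(r)           # kinetic + potential
    off = -0.5 / (mass * h ** 2) * np.ones(n - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    E = np.linalg.eigvalsh(H)
    return E[E < 0]          # negative eigenvalues = bound states
```

As a sanity check, a Coulomb-like potential -1/r with unit reduced mass reproduces the hydrogenic ground energy of about -0.5 in natural units; the rms radius of each eigenvector is what distinguishes a loosely bound "molecule" from a compact cluster in the abstract's language.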
A Nonparametric Method for the Derivation of α/β Ratios from the Effect of Fractionated Irradiations
Multifractionation isoeffect data are commonly analysed under the assumption that cell survival determines the observed tissue or tumour response, and that it follows a linear-quadratic dose dependence. The analysis is employed to derive the α/β ratios of the linear-quadratic dose dependence, and different methods have been developed for this purpose. A common method uses the so-called Fe plot. A more complex but also more rigorous method has been introduced by Lam et al. (1979). Their method, which is based on numerical optimization procedures, is generalized and somewhat simplified in the present study. Tumour-regrowth data are used to explain the nonparametric procedure which provides α/β ratios without the need to postulate analytical expressions for the relationship between cell survival and regrowth delay
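The Fe-plot method mentioned above admits a compact illustration. For an isoeffect E = n(αd + βd²), the reciprocal total dose satisfies 1/(nd) = α/E + (β/E)d, so a straight-line fit of 1/(nd) against dose per fraction d yields α/β as intercept over slope. This is the simple method the abstract contrasts with the nonparametric procedure, not the Lam et al. optimization itself:

```python
import numpy as np

def alpha_beta_ratio(n_fractions, dose_per_fraction):
    """Estimate alpha/beta from multifractionation isoeffect data via
    the Fe plot: fit 1/(n*d) = alpha/E + (beta/E)*d and return
    intercept/slope. Simple Fe-plot sketch, not the nonparametric
    method of the paper."""
    n = np.asarray(n_fractions, dtype=float)
    d = np.asarray(dose_per_fraction, dtype=float)
    inv_total = 1.0 / (n * d)                    # reciprocal total dose
    slope, intercept = np.polyfit(d, inv_total, 1)
    return intercept / slope
```

On exact linear-quadratic isoeffect data the fit recovers α/β identically; on real regrowth data the scatter of points about the line is what motivates the more rigorous treatment.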
Slip-velocity of large neutrally-buoyant particles in turbulent flows
We discuss possible definitions for a stochastic slip velocity that describes
the relative motion between large particles and a turbulent flow. This
definition is necessary because the slip velocity used in the standard drag
model fails when particle size falls within the inertial subrange of ambient
turbulence. We propose two definitions, selected in part due to their
simplicity: they do not require filtration of the fluid phase velocity field,
nor do they require the construction of conditional averages on particle
locations. A key benefit of this simplicity is that the stochastic slip
velocity proposed here can be calculated equally well for laboratory, field,
and numerical experiments. The stochastic slip velocity allows the definition
of a Reynolds number that should indicate whether large particles in turbulent
flow behave (a) as passive tracers; (b) as a linear filter of the velocity
field; or (c) as a nonlinear filter to the velocity field. We calculate the
value of stochastic slip for ellipsoidal and spherical particles (the size of
the Taylor microscale) measured in laboratory homogeneous isotropic turbulence.
The resulting Reynolds number is significantly higher than 1 for both particle
shapes, and velocity statistics show that particle motion is a complex
non-linear function of the fluid velocity. We further investigate the nonlinear
relationship by comparing the probability distribution of fluctuating
velocities for the particle and fluid phases.
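A naive paired-sample reading of a "slip velocity," and the particle Reynolds number built from it, fits in a few lines. Note that the paper's proposed definitions deliberately avoid requiring matched particle and fluid samples (or any filtering or conditional averaging); the version below is only a simplest-case illustration with hypothetical inputs:

```python
import numpy as np

def slip_reynolds(u_particle, u_fluid, size, nu=1e-6):
    """Naive rms slip velocity from paired 1-D velocity samples (m/s)
    and the resulting particle Reynolds number Re = slip*size/nu,
    with particle size in m and kinematic viscosity nu in m^2/s.
    Illustration only; the paper's stochastic definitions do not
    require paired samples."""
    du = np.asarray(u_particle, dtype=float) - np.asarray(u_fluid, dtype=float)
    slip = np.sqrt(np.mean(du ** 2))             # rms relative velocity
    return slip, slip * size / nu
```

A resulting Reynolds number well above 1, as reported in the abstract, signals that the particle is not a passive tracer and that a linear-filter picture of its motion is suspect.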