Efficient Monte Carlo Integration Using Boosted Decision Trees and Generative Deep Neural Networks
New machine learning based algorithms have been developed and tested for
Monte Carlo integration based on generative Boosted Decision Trees and Deep
Neural Networks. Both of these algorithms exhibit substantial improvements
compared to existing algorithms for non-factorizable integrands in terms of the
achievable integration precision for a given number of target function
evaluations. Large-scale Monte Carlo generation of complex collider physics processes with improved efficiency can be achieved by implementing these algorithms in commonly used matrix-element Monte Carlo generators, once their robustness has been demonstrated and their performance validated for the relevant classes of matrix elements.
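The core idea — drawing samples from a generative model shaped like the integrand and correcting with importance weights — can be illustrated without any machine learning. In this minimal Python sketch (my own illustration, not the paper's algorithm), a hand-built Gaussian proposal stands in for a trained generative model, targeting a non-factorizable ridge integrand on the unit square:

```python
import math
import random

def integrand(x, y):
    # A deliberately non-factorizable target: a narrow ridge along x = y.
    return math.exp(-50.0 * (x - y) ** 2)

def crude_mc(n, rng):
    # Plain Monte Carlo: uniform sampling on the unit square.
    return sum(integrand(rng.random(), rng.random()) for _ in range(n)) / n

def importance_mc(n, rng):
    # Proposal shaped like the ridge: x ~ U(0,1), y ~ Normal(x, sigma).
    # This hand-built proposal plays the role of a trained generative model.
    sigma = 0.1
    total = 0.0
    for _ in range(n):
        x = rng.random()
        y = rng.gauss(x, sigma)
        if 0.0 <= y <= 1.0:  # the integrand is restricted to the unit square
            q = math.exp(-0.5 * ((y - x) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
            total += integrand(x, y) / q  # importance weight f/q
    return total / n
```

For the same number of integrand evaluations, the adapted proposal produces a far lower-variance estimate of the integral (true value about 0.2307), which is the kind of precision gain the abstract refers to.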
Statistical Network Analysis for Functional MRI: Summary Networks and Group Comparisons
Comparing weighted networks in neuroscience is hard, because the topological
properties of a given network are necessarily dependent on the number of edges
of that network. This problem arises in the analysis of both weighted and
unweighted networks. The term density is often used in this context, in order
to refer to the mean edge weight of a weighted network, or to the number of
edges in an unweighted one. Comparing families of networks is therefore
statistically difficult because differences in topology are necessarily
associated with differences in density. In this review paper, we consider this
problem from two different perspectives, which include (i) the construction of
summary networks, such as how to compute and visualize the mean network from a
sample of network-valued data points; and (ii) how to test for topological
differences, when two families of networks also exhibit significant differences
in density. In the first instance, we show that the issue of summarizing a
family of networks can be conducted by adopting a mass-univariate approach,
which produces a statistical parametric network (SPN). In the second part of
this review, we then highlight the inherent problems associated with the
comparison of topological functions of families of networks that differ in
density. In particular, we show that a wide range of topological summaries, such as global efficiency and network modularity, are highly sensitive to
differences in density. Moreover, these problems are not restricted to
unweighted metrics, as we demonstrate that the same issues remain present when
considering the weighted versions of these metrics. We conclude by encouraging
caution, when reporting such statistical comparisons, and by emphasizing the
importance of constructing summary networks.

Comment: 16 pages, 5 figures
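The density confound described above is easy to reproduce. The sketch below (an illustration under simple assumptions, not code from the review) computes global efficiency — the mean inverse shortest-path length — for two Erdős–Rényi graphs that differ only in edge density:

```python
import random
from collections import deque

def global_efficiency(adj):
    # Mean of 1/d(i, j) over ordered node pairs; unreachable pairs count 0.
    n = len(adj)
    total = 0.0
    for src in range(n):
        dist = {src: 0}          # BFS shortest paths from src
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))

def er_graph(n, p, rng):
    # Erdos-Renyi G(n, p) as an adjacency list of sets.
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

rng = random.Random(0)
eff_sparse = global_efficiency(er_graph(60, 0.08, rng))
eff_dense = global_efficiency(er_graph(60, 0.30, rng))
```

Both graphs come from the same generative model, yet the denser one scores systematically higher — precisely the confound that makes raw between-group comparisons of such summaries misleading.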
Testing the order of a model
This paper deals with order identification for nested models in the i.i.d.
framework. We study the asymptotic efficiency of two generalized likelihood
ratio tests of the order. They are based on two estimators which are proved to
be strongly consistent. A version of Stein's lemma yields an optimal
underestimation error exponent. The lemma also implies that the overestimation
error exponent is necessarily trivial. Our tests admit nontrivial
underestimation error exponents. The optimal underestimation error exponent is
achieved in some situations. The overestimation error can decay exponentially
with respect to a positive power of the number of observations. These results
are proved under mild assumptions by relating the underestimation (resp.
overestimation) error to large (resp. moderate) deviations of the
log-likelihood process. In particular, it is not necessary that the classical Cramér condition be satisfied; namely, the log-densities are not required to admit every exponential moment. Three benchmark examples with
specific difficulties (location mixture of normal distributions, abrupt changes
and various regressions) are detailed so as to illustrate the generality of our
results.

Comment: Published at http://dx.doi.org/10.1214/009053606000000344 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
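The flavor of penalized generalized-likelihood-ratio order selection can be shown on the simplest nested pair of i.i.d. models. This is a textbook-style toy (N(0,1) vs N(mu,1), BIC-like log n penalty), not the paper's estimator: the penalty grows so overestimation is suppressed, while underestimation still vanishes quickly with n.

```python
import math
import random

def order_estimate(sample, pen):
    # Nested i.i.d. models: M0 = N(0,1) vs M1 = N(mu,1) with mu free.
    # The sup log-likelihood gain of M1 over M0 is n * xbar^2 / 2.
    n = len(sample)
    xbar = sum(sample) / n
    glr = 0.5 * n * xbar * xbar
    # Penalized rule: pick the larger order only if the gain beats pen(n).
    return 1 if glr > pen(n) else 0

# pen(n) -> infinity but pen(n)/n -> 0: this is the regime in which such
# estimators are strongly consistent.
pen = lambda n: math.log(n)
```

Running this on repeated samples from the true order-1 model (nonzero mean) almost never underestimates, while samples from the order-0 model only rarely trip the penalized threshold.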
Formation rates of Dark Matter Haloes
We derive an estimate of the rate of formation of dark matter halos per unit
volume as a function of the halo mass and redshift of formation. Analytical
estimates of the number density of dark matter halos are useful in modeling
several cosmological phenomena. We use the excursion set formalism for
computing the formation rate of dark matter halos. We use an approach that
allows us to differentiate between major and minor mergers, as this is a
pertinent issue for semi-analytic models of galaxy formation. We compute the
formation rate for the Press-Schechter and the Sheth-Tormen mass function. We
show that the formation rate computed in this manner is positive at all scales.
We comment on the Sasaki formalism where negative halo formation rates are
obtained. Our estimates compare very well with N-Body simulations for a variety
of models. We also discuss the halo survival probability and the formation
redshift distributions using our method.

Comment: 30 pages, 9 figures
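The two mass functions that such formation-rate estimates are built on can be written down directly. The sketch below (illustrative; it covers only the multiplicity functions, not the paper's excursion-set formation-rate calculation) checks the Press-Schechter normalization numerically and compares it with the Sheth-Tormen form:

```python
import math

def nu_f_ps(nu):
    # Press-Schechter multiplicity: nu f(nu) = sqrt(2/pi) nu exp(-nu^2/2),
    # with nu = delta_c / sigma(M) the peak height.
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-0.5 * nu * nu)

def nu_f_st(nu, a=0.707, p=0.3, A=0.3222):
    # Sheth-Tormen multiplicity with its standard parameter values.
    anu2 = a * nu * nu
    return A * math.sqrt(2.0 * a / math.pi) * (1.0 + anu2 ** -p) * nu * math.exp(-0.5 * anu2)

def mass_fraction(nu_f, lo=1e-6, hi=12.0, steps=100000):
    # Trapezoidal integral of f(nu) dnu = (nu f(nu)) / nu dnu: the fraction
    # of mass locked up in halos, equal to 1 for the normalized PS function.
    h = (hi - lo) / steps
    s = 0.5 * (nu_f(lo) / lo + nu_f(hi) / hi)
    for k in range(1, steps):
        nu = lo + k * h
        s += nu_f(nu) / nu
    return s * h
```

Relative to Press-Schechter, the Sheth-Tormen function puts more weight at high peak height (rarer, more massive halos) and less near nu of order one, which is why the two give different formation rates at a given mass and redshift.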
Climate Change and the Stability of Water Allocation Agreements
We analyse agreements on river water allocation between riparian countries. Besides being efficient, water allocation agreements need to be stable in order to be effective in increasing the efficiency of water use. In this paper we assess the stability of water allocation agreements using a game theoretic model. We consider the effects of climate change and of the choice of a sharing rule on stability. Our results show that a decrease in mean river flow decreases the stability of an agreement, while an increased variance can have a positive or a negative effect on stability. An agreement in which the downstream country is allocated a fixed amount of water has the lowest stability of the sharing rules considered. These results hold for both constant and flexible non-water transfers.
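A stylized version of such a stability calculation can be sketched in a few lines. Every number below (the square-root water benefit, the transfer size, the flow distribution, the 40% share) is a hypothetical illustration, not the paper's calibrated model; the sketch only shows the mechanism by which a lower mean flow, and a fixed-amount rule, erode stability:

```python
import math
import random

def stability_prob(mean_flow, rule, transfer=0.35, share=0.4, n=20000, seed=1):
    # Fraction of flow realizations in which the upstream country prefers
    # honoring the agreement (concave water benefit sqrt(.) plus a fixed
    # non-water transfer) over unilaterally keeping the whole flow.
    rng = random.Random(seed)
    stable = 0
    for _ in range(n):
        q = max(rng.gauss(mean_flow, 0.25 * mean_flow), 0.01)
        # Sharing rule: downstream gets a fixed amount, or a fixed share of q.
        w_down = share if rule == "fixed" else share * q
        if w_down < q and math.sqrt(q - w_down) + transfer >= math.sqrt(q):
            stable += 1
    return stable / n
```

In dry realizations the fixed-amount rule forces the upstream country to give up a relatively larger slice, so compliance fails more often than under a proportional rule — the qualitative pattern the abstract reports.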
Customer-oriented risk assessment in Network Utilities
For companies that distribute services such as telecommunications, water, energy and gas, quality as perceived by customers has a strong impact on the fulfillment of financial goals: higher perceived quality increases demand and reduces the risk of customer churn (loss of customers). Failures by these companies may affect customers on a massive scale, increasing their intention to leave the company. Maintenance performance, and specifically service reliability, therefore has a strong influence on financial goals. This paper proposes a methodology to evaluate the contribution of the maintenance department in economic terms, based on service unreliability caused by network failures. The methodology aims to provide an analysis of failures that facilitates decision making about maintenance (preventive/predictive and corrective) costs versus the negative impact on end-customer invoicing, based on the probability of losing customers. Survival analysis of recurrent failures with the General Renewal Process distribution is used for this novel purpose, with the intention that it be applied as a standard procedure to calculate the expected financial impact of maintenance over a given period of time. Geographical areas of coverage are also distinguished, enabling the comparison of different technical or management alternatives. Two case studies in a telecommunications services company are presented to illustrate the applicability of the methodology.
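A General Renewal Process can be simulated with a Kijima type-I virtual-age model, which interpolates between as-good-as-new and as-bad-as-old repairs. The Weibull parameters and restoration factors below are illustrative, not fitted to the paper's case studies; the sketch shows why the restoration factor drives the expected failure count, and hence the maintenance cost, over a planning horizon:

```python
import math
import random

def grp_failure_count(beta, theta, q, horizon, rng):
    # One history of a Kijima type-I General Renewal Process whose first
    # failure time is Weibull(shape=beta, scale=theta). q is the restoration
    # factor: q = 0 is perfect repair, q = 1 is as-bad-as-old.
    t, v, n = 0.0, 0.0, 0
    while True:
        u = rng.random()
        # Inverse of the conditional Weibull survival given virtual age v.
        x = theta * ((v / theta) ** beta - math.log(u)) ** (1.0 / beta) - v
        t += x
        if t > horizon:
            return n
        n += 1
        v += q * x  # Kijima-I virtual age update after repair

def expected_failures(q, reps=3000, seed=7):
    # Monte Carlo estimate of the mean failure count over a 5-unit horizon
    # for a wear-out (beta = 2) item.
    rng = random.Random(seed)
    return sum(grp_failure_count(2.0, 1.0, q, 5.0, rng) for _ in range(reps)) / reps
```

At q = 1 the process reduces to a power-law nonhomogeneous Poisson process with E[N(T)] = (T/theta)^beta, so imperfect restoration can multiply the expected failure count — and the downstream invoicing impact — several-fold compared with perfect repair.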