Computational confirmation of scaling predictions for equilibrium polymers
We report the results of extensive Dynamic Monte Carlo simulations of systems
of self-assembled Equilibrium Polymers without rings in good solvent.
Confirming recent theoretical predictions, the mean chain length is found to
scale as $\langle L \rangle = L^{*} (\phi/\phi^{*})^{\alpha} \propto
\phi^{\alpha} \exp(\delta E)$, with the exponent $\alpha$ taking different
values in the dilute and
semi-dilute limits respectively. The average size of the micelles, as measured
by the end-to-end distance and the radius of gyration, follows a very similar
crossover scaling to that of conventional quenched polymer chains. In the
semi-dilute regime, the chain size distribution is found to be exponential,
crossing over to a Schulz-Zimm-type distribution in the dilute limit. The very
large size of our simulations (which involve mean chain lengths up to 5000,
even at high polymer densities) also allows an accurate determination of the
self-avoiding-walk susceptibility exponent $\gamma$.
Comment: 6 pages, 4 figures, LaTeX
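The scaling law above lends itself to a quick numerical illustration. The Python sketch below evaluates $\langle L \rangle \propto \phi^{\alpha} \exp(\delta E)$ on a grid of volume fractions; the exponent, $\delta$, and end-cap energy values are illustrative placeholders, not the values fitted in the paper.

```python
import numpy as np

# Minimal sketch of the crossover scaling law for equilibrium polymers,
#   <L> = L* (phi/phi*)^alpha  ~  phi^alpha * exp(delta * E).
# alpha, delta and E below are illustrative placeholders, not the
# exponents or end-cap energy determined in the paper.
def mean_chain_length(phi, alpha, delta, E):
    """Mean chain length <L>, up to the prefactor L*."""
    return phi**alpha * np.exp(delta * E)

phi = np.logspace(-4, -1, 4)  # monomer volume fractions
for p in phi:
    L_dil = mean_chain_length(p, alpha=0.46, delta=0.5, E=10.0)
    L_semi = mean_chain_length(p, alpha=0.60, delta=0.5, E=10.0)
    print(f"phi={p:.0e}  <L> (dilute exponent)={L_dil:8.1f}  "
          f"<L> (semi-dilute exponent)={L_semi:8.1f}")
```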
Computational predictions of energy materials using density functional theory
In the search for new functional materials, quantum mechanics is an exciting starting point. The fundamental laws that govern the behaviour of electrons can, at the other end of the scale, predict the performance of a material for a targeted application. In some cases, this is achievable using density functional theory (DFT). In this Review, we highlight DFT studies predicting energy-related materials that were subsequently confirmed experimentally. The attributes and limitations of DFT for the computational design of materials for lithium-ion batteries, hydrogen production and storage materials, superconductors, photovoltaics and thermoelectric materials are discussed. In the future, we expect that the accuracy of DFT-based methods will continue to improve and that growth in computing power will enable millions of materials to be virtually screened for specific applications. Thus, these examples represent a first glimpse of what may become a routine and integral step in materials discovery.
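As a schematic of what such virtual screening might look like in practice, the sketch below filters a candidate list on two DFT-derived properties. The functions, candidate names, and thresholds are hypothetical stand-ins; a real workflow would dispatch these calculations to an electronic-structure code rather than a lookup table.

```python
# Schematic of a DFT-based virtual screening loop (not from the Review).
# compute_band_gap / compute_hull_distance stand in for real DFT runs.

def compute_band_gap(formula: str) -> float:
    """Placeholder: would run a DFT band-structure calculation (eV)."""
    return {"Si": 1.1, "GaAs": 1.4, "Cu": 0.0}.get(formula, 0.5)

def compute_hull_distance(formula: str) -> float:
    """Placeholder: energy above the convex hull (eV/atom)."""
    return {"Si": 0.0, "GaAs": 0.0, "Cu": 0.0}.get(formula, 0.2)

candidates = ["Si", "GaAs", "Cu", "XyZ3"]

# Keep thermodynamically stable candidates with a photovoltaic-like gap.
hits = [
    f for f in candidates
    if compute_hull_distance(f) < 0.05       # stability filter
    and 1.0 <= compute_band_gap(f) <= 1.8    # target gap window (eV)
]
print(hits)  # ['Si', 'GaAs']
```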
Theoretical uncertainties in sparticle mass predictions from computational tools
We estimate the current theoretical uncertainty in sparticle mass predictions
by comparing several state-of-the-art computations within the minimal
supersymmetric standard model (MSSM). We find that the theoretical uncertainty
is comparable to the expected statistical errors from the Large Hadron Collider
(LHC), and significantly larger than those expected from a future e+e- Linear
Collider (LC). We quantify the theoretical uncertainty on relevant sparticle
observables for both LHC and LC, and show that the value of the error is
significantly dependent upon the supersymmetry (SUSY) breaking parameters. We
also present the theoretical uncertainty induced in fundamental-scale SUSY
breaking parameters when they are fitted from LHC measurements. Two regions of
the SUSY parameter space where accurate predictions are particularly difficult
are examined in detail: the large tan(beta) and focus point regimes.
Comment: 22 pages, 6 figures; comment added pointing out that the 2-loop QCD
corrections to m_t are incorrect in some of the programs investigated. We give
the correct formulae.
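One concrete way to realize the comparison described above is to take the spread across independent spectrum calculators as the theory error on an observable. The sketch below does this for a single sparticle mass; the program names and mass values are invented for illustration and are not the codes or numbers compared in the paper.

```python
import statistics

# Hypothetical gluino-mass predictions (GeV) from several MSSM spectrum
# calculators run at the same input SUSY-breaking parameters.
# Names and values are invented for illustration only.
predictions = {
    "code_A": 607.2,
    "code_B": 611.8,
    "code_C": 604.5,
    "code_D": 609.9,
}

masses = list(predictions.values())
central = statistics.mean(masses)
# Half the full spread is one simple estimate of the theory error.
theory_error = (max(masses) - min(masses)) / 2.0

print(f"m_gluino = {central:.1f} +/- {theory_error:.1f} GeV (theory)")
```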
Extracting falsifiable predictions from sloppy models
Successful predictions are among the most compelling validations of any
model. Extracting falsifiable predictions from nonlinear multiparameter models
is complicated by the fact that such models are commonly sloppy, possessing
sensitivities to different parameter combinations that range over many decades.
Here we discuss how sloppiness affects the sorts of data that best constrain
model predictions, makes linear uncertainty approximations dangerous, and
introduces computational difficulties in Monte Carlo uncertainty analysis. We
also present a useful test problem and suggest refinements to the standards by
which models are communicated.
Comment: 4 pages, 2 figures. Submitted to the Annals of the New York Academy
of Sciences for publication in "Reverse Engineering Biological Networks:
Opportunities and Challenges in Computational Methods for Pathway Inference".
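Sloppiness is easy to exhibit numerically. The sketch below builds the Gauss-Newton Hessian $J^{\top} J$ for a sum-of-exponentials model, a standard sloppy example, and reports how many decades its eigenvalues span; the sample times and rates are invented, and this is not the test problem proposed in the paper.

```python
import numpy as np

# Sum-of-exponentials model y(t) = sum_k exp(-theta_k * t): a standard
# example of a "sloppy" model. Sample times and rates are illustrative.
t = np.linspace(0.1, 5.0, 50)
theta = np.array([0.5, 1.0, 2.0, 4.0])

# Jacobian of the model output with respect to the rate parameters:
#   d y(t) / d theta_k = -t * exp(-theta_k * t).
J = np.stack([-t * np.exp(-th * t) for th in theta], axis=1)

# Eigenvalues of the Gauss-Newton Hessian J^T J: the ratio of largest
# to smallest typically spans many decades, the signature of sloppiness.
eigs = np.linalg.eigvalsh(J.T @ J)
print("eigenvalues:", np.sort(eigs)[::-1])
print("range spans %.1f decades" % np.log10(eigs.max() / eigs.min()))
```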
Module networks revisited: computational assessment and prioritization of model predictions
The solution of high-dimensional inference and prediction problems in
computational biology is almost always a compromise between mathematical theory
and practical constraints such as limited computational resources. As time
progresses, computational power increases but well-established inference
methods often remain locked in their initial suboptimal solution. We revisit
the approach of Segal et al. (2003) to infer regulatory modules and their
condition-specific regulators from gene expression data. In contrast to their
direct optimization-based solution, we use a more representative centroid-like
solution extracted from an ensemble of possible statistical models to explain
the data. The ensemble method automatically selects a subset of the most
informative genes and builds a quantitatively better model for them. Genes
which cluster together in the majority of models produce functionally more
coherent modules. Regulators which are consistently assigned to a module are
more often supported by the literature, but a single model always contains many
regulator assignments not supported by the ensemble. Reliably detecting
condition-specific or combinatorial regulation is particularly hard in a single
optimum but can be achieved using ensemble averaging.
Comment: 8 pages REVTeX, 6 figures
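The ensemble logic described above, trusting assignments that recur across many sampled models, can be illustrated with a gene-gene co-occurrence matrix. In the sketch below the module assignments are invented, and nothing here reproduces the authors' actual sampler; it only shows the averaging step.

```python
import numpy as np

# Illustrative ensemble of module assignments: each row is one sampled
# model, each column a gene, entries are module labels. Invented data.
assignments = np.array([
    [0, 0, 1, 1, 2],
    [0, 0, 1, 2, 2],
    [0, 0, 1, 1, 2],
    [1, 1, 0, 0, 2],   # relabeled modules, same grouping
])

n_models, n_genes = assignments.shape
# Co-occurrence frequency: fraction of models placing genes i, j together.
cooc = np.zeros((n_genes, n_genes))
for row in assignments:
    cooc += (row[:, None] == row[None, :])
cooc /= n_models

# Gene pairs that co-cluster in, say, >= 75% of models count as reliable.
reliable = np.argwhere(np.triu(cooc >= 0.75, k=1))
print(reliable)  # pairs (0,1) and (2,3) with this toy ensemble
```

Note that the co-occurrence matrix is invariant under relabeling of modules within each model, which is exactly why the averaged statistic is more stable than any single optimum.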
Universality of Bayesian Predictions
Given the sequential update nature of Bayes' rule, Bayesian methods find natural application to prediction problems. Advances in computational methods make it routine to use Bayesian methods in econometrics, so there is a strong case for feasible predictions in a Bayesian framework. This paper studies the theoretical properties of Bayesian predictions and shows that, under minimal conditions, finite-sample bounds can be derived for the loss incurred using Bayesian predictions under the Kullback-Leibler divergence. In particular, the concept of universality of predictions is discussed, and universality is established for Bayesian predictions in a variety of settings. These include predictions under almost arbitrary loss functions, model averaging, predictions in a non-stationary environment, and predictions under model misspecification. Given the possibility of regime switches and multiple breaks in economic series, as well as the need to choose among different forecasting models, which may inevitably be misspecified, the finite-sample results derived here are of interest to economic and financial forecasting.
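A minimal instance of the model-averaged Bayesian predictions studied here: compute each model's marginal likelihood, normalize into posterior model weights, and average the models' predictions. The two Gaussian "models" and the data below are invented for illustration; the paper's finite-sample loss bounds are not reproduced.

```python
import numpy as np
from scipy.stats import norm

# Two candidate models for an observable: N(mu, 1) with different means.
# Data and means are invented; this only illustrates the mechanics of
# Bayesian model averaging for prediction.
data = np.array([0.9, 1.2, 0.8, 1.1])
models = {"M1": 1.0, "M2": 2.0}

# Marginal likelihood of each model (parameters fixed here, so it is
# just the likelihood), then normalized posterior model weights.
logml = {m: norm.logpdf(data, loc=mu, scale=1.0).sum()
         for m, mu in models.items()}
zmax = max(logml.values())
weights = {m: np.exp(v - zmax) for m, v in logml.items()}
total = sum(weights.values())
weights = {m: w / total for m, w in weights.items()}

# Model-averaged point prediction for the next observation.
prediction = sum(weights[m] * models[m] for m in models)
print(weights, prediction)
```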
Comparisons between harmonic balance and nonlinear output frequency response function in nonlinear system analysis
By using the Duffing oscillator as a case study, this paper shows that the harmonic components in the nonlinear system response to a sinusoidal input calculated using the Nonlinear Output Frequency Response Functions (NOFRFs) are one of the solutions obtained using the Harmonic Balance Method (HBM). A comparison of the performances of the two methods shows that the HBM can capture the well-known jump phenomenon, but is restricted by computational limits for some strongly nonlinear systems and can fail to provide accurate predictions for some harmonic components. Although the NOFRFs cannot capture the jump phenomenon, the method has few computational restrictions. For nonlinear damping systems, the NOFRFs can give better predictions for all the harmonic components in the system response than the HBM, even when the damping system is strongly nonlinear.
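The jump phenomenon that the HBM captures can be seen already in the first-order harmonic balance of the Duffing equation, which reduces the steady-state amplitude to the roots of a cubic. The sketch below solves that cubic at a few drive frequencies; the oscillator parameters are illustrative, not those used in the paper.

```python
import numpy as np

# First-order harmonic balance for the Duffing oscillator
#   x'' + c x' + k x + beta x^3 = F cos(w t),
# with ansatz x = A cos(w t + phi). Balancing the cos and sin terms and
# eliminating phi gives a cubic in u = A^2:
#   (9/16) beta^2 u^3 + (3/2) beta (k - w^2) u^2
#       + ((k - w^2)^2 + (c w)^2) u - F^2 = 0.
# Parameter values are illustrative, not taken from the paper.
c, k, beta, F = 0.05, 1.0, 0.1, 0.2

for w in (0.9, 1.25, 1.4):
    d = k - w**2
    poly = [9 / 16 * beta**2, 1.5 * beta * d, d**2 + (c * w) ** 2, -F**2]
    roots = np.roots(poly)
    u = roots[np.abs(roots.imag) < 1e-9].real  # keep the real roots
    amps = np.sort(np.sqrt(u[u > 0]))
    # Three admissible amplitudes inside a frequency window signal the
    # multivalued response behind the jump phenomenon.
    print(f"w = {w}: steady-state amplitudes {amps}")
```

With these parameters the response is single-valued at w = 0.9 and w = 1.4 but has three coexisting amplitudes near w = 1.25, the hysteresis window in which the jump occurs.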
