
    Computational confirmation of scaling predictions for equilibrium polymers

    We report the results of extensive Dynamic Monte Carlo simulations of systems of self-assembled Equilibrium Polymers without rings in good solvent. Confirming recent theoretical predictions, the mean chain length is found to scale as $\langle L \rangle = L^* (\phi/\phi^*)^\alpha \propto \phi^\alpha \exp(\delta E)$ with exponents $\alpha_d = \delta_d = 1/(1+\gamma) \approx 0.46$ and $\alpha_s = [1+(\gamma-1)/(\nu d - 1)]/2 \approx 0.60$, $\delta_s = 1/2$ in the dilute and semi-dilute limits, respectively. The average size of the micelles, as measured by the end-to-end distance and the radius of gyration, follows a crossover scaling very similar to that of conventional quenched polymer chains. In the semi-dilute regime, the chain size distribution is found to be exponential, crossing over to a Schulz-Zimm type distribution in the dilute limit. The very large size of our simulations (which involve mean chain lengths up to 5000, even at high polymer densities) also allows an accurate determination of the self-avoiding walk susceptibility exponent $\gamma = 1.165 \pm 0.01$. Comment: 6 pages, 4 figures, LaTeX
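    A minimal numerical sketch of the scaling law above: it evaluates the form <L> ~ phi^alpha * exp(delta*E) with the dilute and semi-dilute exponents quoted in the abstract (using gamma ~ 1.165, nu ~ 0.588, d = 3). The scission energy, volume fractions and unit prefactors below are illustrative assumptions, not values taken from the paper.

    ```python
    import numpy as np

    # Exponents quoted in the abstract (gamma ~ 1.165, nu ~ 0.588, d = 3)
    gamma, nu, d = 1.165, 0.588, 3
    alpha_d = delta_d = 1.0 / (1.0 + gamma)                 # ~0.46, dilute limit
    alpha_s = (1.0 + (gamma - 1.0) / (nu * d - 1.0)) / 2.0  # ~0.60, semi-dilute limit
    delta_s = 0.5

    def mean_chain_length(phi, E, dilute=True):
        """Scaling form <L> ~ phi**alpha * exp(delta*E); prefactors are set to 1
        for illustration (the paper fixes them through the crossover at phi*)."""
        alpha, delta = (alpha_d, delta_d) if dilute else (alpha_s, delta_s)
        return phi**alpha * np.exp(delta * E)

    # Illustrative scission energy E = 8 kT and a few volume fractions
    for phi in (0.01, 0.1, 0.4):
        print(phi, mean_chain_length(phi, E=8.0, dilute=phi < 0.05))
    ```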

    Computational predictions of energy materials using density functional theory

    In the search for new functional materials, quantum mechanics is an exciting starting point. The fundamental laws that govern the behaviour of electrons can, at the other end of the scale, predict the performance of a material for a targeted application. In some cases, this is achievable using density functional theory (DFT). In this Review, we highlight DFT studies predicting energy-related materials that were subsequently confirmed experimentally. The attributes and limitations of DFT for the computational design of materials for lithium-ion batteries, hydrogen production and storage materials, superconductors, photovoltaics and thermoelectric materials are discussed. In the future, we expect that the accuracy of DFT-based methods will continue to improve and that growth in computing power will enable millions of materials to be virtually screened for specific applications. Thus, these examples represent a first glimpse of what may become a routine and integral step in materials discovery.

    Theoretical uncertainties in sparticle mass predictions from computational tools

    We estimate the current theoretical uncertainty in sparticle mass predictions by comparing several state-of-the-art computations within the minimal supersymmetric standard model (MSSM). We find that the theoretical uncertainty is comparable to the expected statistical errors from the Large Hadron Collider (LHC), and significantly larger than those expected from a future e+e- Linear Collider (LC). We quantify the theoretical uncertainty on relevant sparticle observables for both the LHC and the LC, and show that the value of the error depends significantly on the supersymmetry (SUSY) breaking parameters. We also present the theoretical uncertainty induced in fundamental-scale SUSY breaking parameters when they are fitted from LHC measurements. Two regions of the SUSY parameter space where accurate predictions are particularly difficult are examined in detail: the large tan(beta) and focus point regimes. Comment: 22 pages, 6 figures; comment added pointing out that the 2-loop QCD corrections to mt are incorrect in some of the programs investigated. We give the correct formulae.
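    A hedged sketch of the kind of comparison described above: given mass predictions for one sparticle from several spectrum codes at the same MSSM input point, the spread across codes serves as a rough estimate of the theoretical uncertainty. The code names and mass values below are placeholders, not numbers from the paper.

    ```python
    import statistics

    # Hypothetical gluino mass predictions (GeV) from several spectrum codes
    # at the same MSSM input point; all numbers are placeholders.
    predictions = {"codeA": 595.2, "codeB": 601.8, "codeC": 598.4, "codeD": 607.1}

    masses = list(predictions.values())
    central = statistics.mean(masses)
    half_spread = (max(masses) - min(masses)) / 2.0   # half the full spread across codes

    print(f"central = {central:.1f} GeV, theory uncertainty ~ +/- {half_spread:.1f} GeV "
          f"({100 * half_spread / central:.1f}%)")
    ```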

    Extracting falsifiable predictions from sloppy models

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties into Monte Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated. Comment: 4 pages, 2 figures. Submitted to the Annals of the New York Academy of Sciences for publication in "Reverse Engineering Biological Networks: Opportunities and Challenges in Computational Methods for Pathway Inference".
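    A small sketch of what "sloppiness" looks like in practice, using a sum-of-exponentials toy model (a standard sloppy-model example, not necessarily the test problem of the paper): the eigenvalues of the Gauss-Newton approximation to the cost Hessian span many decades, which is why linearized uncertainty estimates along the soft directions are unreliable.

    ```python
    import numpy as np

    # Toy sloppy model: y(t, theta) = sum_i exp(-theta_i * t)
    t = np.linspace(0.1, 3.0, 20)
    theta0 = np.array([0.5, 1.0, 2.0, 4.0])

    def residual_jacobian(theta):
        # d y / d theta_i = -t * exp(-theta_i * t); shape (n_times, n_params)
        return -t[:, None] * np.exp(-np.outer(t, theta))

    J = residual_jacobian(theta0)
    H = J.T @ J                      # Gauss-Newton approximation to the Hessian
    eigvals = np.linalg.eigvalsh(H)

    # Sloppiness: eigenvalues spread over many decades, so linearized error
    # bars along the "sloppy" directions are not trustworthy.
    print("Hessian eigenvalues:", eigvals)
    print("decades spanned:", np.log10(eigvals.max() / eigvals.min()))
    ```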

    Module networks revisited: computational assessment and prioritization of model predictions

    The solution of high-dimensional inference and prediction problems in computational biology is almost always a compromise between mathematical theory and practical constraints such as limited computational resources. As time progresses, computational power increases, but well-established inference methods often remain locked in their initial suboptimal solution. We revisit the approach of Segal et al. (2003) to infer regulatory modules and their condition-specific regulators from gene expression data. In contrast to their direct optimization-based solution, we use a more representative centroid-like solution extracted from an ensemble of possible statistical models to explain the data. The ensemble method automatically selects a subset of the most informative genes and builds a quantitatively better model for them. Genes which cluster together in the majority of models produce functionally more coherent modules. Regulators which are consistently assigned to a module are more often supported by the literature, but a single model always contains many regulator assignments not supported by the ensemble. Reliably detecting condition-specific or combinatorial regulation is particularly hard in a single optimum but can be achieved using ensemble averaging. Comment: 8 pages REVTeX, 6 figures.
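    A minimal sketch of the ensemble-averaging idea, assuming module assignments from many sampled models are already available (the model sampling itself is not shown, and the assignments below are random placeholders): genes that co-cluster in the majority of models define the centroid-like consensus modules.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, n_models, n_modules = 50, 100, 5

    # Placeholder ensemble: each row is one sampled model's module assignment
    # per gene (in the actual method these come from an ensemble of fitted models).
    assignments = rng.integers(0, n_modules, size=(n_models, n_genes))

    # Co-clustering frequency: fraction of models placing genes i and j together.
    coclust = np.zeros((n_genes, n_genes))
    for a in assignments:
        coclust += (a[:, None] == a[None, :])
    coclust /= n_models

    # Gene pairs co-assigned in the majority of models support the consensus modules.
    stable_pairs = np.argwhere(np.triu(coclust > 0.5, k=1))
    print("consistently co-clustered pairs:", len(stable_pairs))
    ```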

    Comparisons between harmonic balance and nonlinear output frequency response function in nonlinear system analysis

    By using the Duffing oscillator as a case study, this paper shows that the harmonic components in the nonlinear system response to a sinusoidal input calculated using the Nonlinear Output Frequency Response Functions (NOFRFs) are one of the solutions obtained using the Harmonic Balance Method (HBM). A comparison of the performances of the two methods shows that the HBM can capture the well-known jump phenomenon, but is restricted by computational limits for some strongly nonlinear systems and can fail to provide accurate predictions for some harmonic components. Although the NOFRFs cannot capture the jump phenomenon, the method has few computational restrictions. For nonlinear damping systems, the NOFRFs can give better predictions for all the harmonic components in the system response than the HBM, even when the damping is strongly nonlinear.
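    A minimal single-harmonic HBM sketch for a Duffing oscillator x'' + c x' + k x + k3 x^3 = A cos(w t): assuming x = a cos(w t + phi) and balancing the fundamental harmonic yields a cubic in a^2, and frequency ranges with several admissible roots correspond to the coexisting branches behind the jump phenomenon. The parameter values are illustrative, not taken from the paper, and higher harmonics are neglected.

    ```python
    import numpy as np

    # Duffing oscillator x'' + c x' + k x + k3 x**3 = A cos(w t); illustrative values.
    c, k, k3, A = 0.1, 1.0, 0.5, 0.3

    def hbm_amplitudes(w):
        """Single-harmonic balance: assume x = a*cos(w*t + phi) and solve the
        resulting cubic in a**2.  Several positive real roots signal the
        coexisting response branches responsible for the jump phenomenon."""
        u = k - w**2
        poly = [9 / 16 * k3**2, 1.5 * k3 * u, u**2 + (c * w)**2, -A**2]
        roots = np.roots(poly)
        z = roots[np.isreal(roots) & (roots.real > 0)].real   # admissible a**2 values
        return np.sqrt(z)

    for w in (0.8, 1.2, 1.6):
        print(f"w = {w}: response amplitudes {np.round(hbm_amplitudes(w), 3)}")
    ```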