
    Transport Coefficients from Large Deviation Functions

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. The method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo-based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity. Comment: 11 pages, 5 figures
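
    The link between the large deviation function and a transport coefficient can be made concrete. For a time-integrated current J_t, the scaled cumulant generating function psi(lambda) = lim_{t→∞} (1/t) ln⟨exp(lambda J_t)⟩ is quadratic near lambda = 0, and its curvature there gives the Green-Kubo transport coefficient. The following is a minimal sketch of that readout, assuming synthetic Gaussian current samples in place of the paper's diffusion Monte Carlo trajectories; all parameters are illustrative.

```python
import numpy as np

# Hedged illustration (not the paper's diffusion Monte Carlo): estimate the
# scaled cumulant generating function psi(lam) = (1/t) ln<exp(lam * J_t)>
# from equilibrium samples of a time-integrated current J_t, then read the
# transport coefficient off the curvature at lam = 0.

rng = np.random.default_rng(0)
t_obs = 100.0                 # observation time per trajectory
D_true = 0.5                  # toy transport coefficient used to make data
J = rng.normal(0.0, np.sqrt(2.0 * D_true * t_obs), size=50_000)

lams = np.linspace(-0.05, 0.05, 11)
psi = []
for lam in lams:
    m = np.max(lam * J)       # stabilized log-mean-exp
    psi.append((m + np.log(np.mean(np.exp(lam * J - m)))) / t_obs)

# Near zero, psi(lam) ~ D * lam^2, so the quadratic fit coefficient estimates D.
D_est = np.polyfit(lams, psi, 2)[0]
print(f"true D = {D_true}, estimated D = {D_est:.3f}")
```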

    Racing Multi-Objective Selection Probabilities

    In the context of noisy multi-objective optimization, dealing with uncertainty requires the decision maker to define preferences about how to handle it, through statistics (e.g., the mean or median) used to evaluate the quality of solutions and to define the corresponding Pareto set. Approximating these statistics requires repeated sampling of the population, drastically increasing the overall computational cost. To tackle this issue, this paper proposes to directly estimate the probability of each individual being selected, using Hoeffding races to dynamically assign the estimation budget during the selection step. The proposed racing approach is validated against static-budget approaches with NSGA-II on noisy versions of the ZDT benchmark functions.
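
    To make the racing idea concrete, below is a minimal sketch of a Hoeffding race that adaptively allocates evaluations among noisy candidates, pruning any candidate whose upper confidence bound falls below the best lower bound. The objective, noise model, and parameters are illustrative assumptions, not the paper's exact NSGA-II setup.

```python
import math
import random

def hoeffding_race(candidates, noisy_eval, budget, value_range=1.0, delta=0.05):
    """Minimal Hoeffding race: sample all surviving candidates in rounds,
    dropping any whose upper confidence bound on the mean falls below the
    best lower bound. Returns the survivors and their estimated means."""
    sums = {c: 0.0 for c in candidates}
    n = {c: 0 for c in candidates}
    alive = list(candidates)
    while budget > 0 and len(alive) > 1:
        for c in alive:
            sums[c] += noisy_eval(c)
            n[c] += 1
            budget -= 1
        # Hoeffding radius for each survivor at confidence 1 - delta.
        eps = {c: value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n[c]))
               for c in alive}
        best_lower = max(sums[c] / n[c] - eps[c] for c in alive)
        alive = [c for c in alive if sums[c] / n[c] + eps[c] >= best_lower]
    return alive, {c: sums[c] / n[c] for c in alive}

# Toy usage: three candidates with hidden means 0.3, 0.5, 0.55 and uniform noise.
means = {"a": 0.3, "b": 0.5, "c": 0.55}
survivors, est = hoeffding_race(
    list(means), lambda c: means[c] + random.uniform(-0.2, 0.2),
    budget=5000, value_range=0.4)
print(survivors, est)
```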

    Broad Histogram: An Overview

    The Broad Histogram is a method allowing the direct calculation of the energy degeneracy g(E). This quantity is independent of thermodynamic concepts such as thermal equilibrium. It depends only on the distribution of allowed (micro)states along the energy axis, not on the energy exchanges between the system and its environment. Once one has obtained g(E), no further effort is needed to consider different environmental conditions, for instance different temperatures, for the same system. The method is based on the exact relation between g(E) and the microcanonical averages of certain macroscopic quantities N^up and N^dn. For an application to a particular problem, one needs to choose an adequate instrument to determine the averages ⟨N^up⟩ and ⟨N^dn⟩ as functions of energy. Replacing the usual fixed-temperature canonical ensemble by the fixed-energy microcanonical ensemble, new subtle concepts emerge. The temperature, for instance, is no longer an external parameter controlled by the user, with all canonical averages being functions of this parameter. Instead, the microcanonical temperature T_m(E) is a function of energy defined from g(E) itself, and is thus an internal (environment-independent) characteristic of the system. Accordingly, all microcanonical averages are functions of E. The present text is an overview of the method. Some features of the microcanonical ensemble are also discussed, as well as some clues towards the definition of efficient Monte Carlo microcanonical sampling rules. Comment: 32 pages, TeX, 3 PS figures
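
    The exact relation underlying the method, g(E)⟨N^up(E)⟩ = g(E + ΔE)⟨N^dn(E + ΔE)⟩, turns directly into a reconstruction recipe: ln g(E) accumulates along the energy axis from the measured ratios. A minimal sketch, assuming the microcanonical averages have already been measured on a uniform energy grid (the arrays below are toy placeholders, not data from the paper):

```python
import numpy as np

# Broad-histogram relation: g(E) <N_up(E)> = g(E + dE) <N_dn(E + dE)>.
# Given measured microcanonical averages on an energy grid, accumulate
# ln g(E) by walking up the energy axis.

def ln_g_from_averages(n_up, n_dn):
    """n_up[i], n_dn[i]: microcanonical averages at energy level i.
    Returns ln g up to an additive constant (ln g[0] = 0)."""
    ln_g = np.zeros(len(n_up))
    for i in range(len(n_up) - 1):
        ln_g[i + 1] = ln_g[i] + np.log(n_up[i] / n_dn[i + 1])
    return ln_g

# Toy placeholder averages on a uniform energy grid (illustrative only).
n_up = np.array([9.0, 7.0, 5.0, 3.0, 1.0])
n_dn = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
ln_g = ln_g_from_averages(n_up, n_dn)

# Once ln g(E) is known, any temperature follows at no extra cost, e.g. via
# canonical weights exp(ln g - E / T) for a chosen T.
print(ln_g)
```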

    Precipitation and latent heating distributions from satellite passive microwave radiometry. Part I: improved method and uncertainties

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm h⁻¹ to 20% at 14 mm h⁻¹. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%–80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%–35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%–15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
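
    At its core, the retrieval described here is a database-weighted composite. The sketch below illustrates that Bayesian averaging under an assumed Gaussian radiance-error model with diagonal covariance; the database, channel count, and error magnitude are placeholders, not the algorithm's actual configuration.

```python
import numpy as np

# Hedged sketch of Bayesian database retrieval: weight each simulated
# cloud profile by how radiatively consistent it is with the observed
# brightness temperatures, then composite the weighted profiles.

rng = np.random.default_rng(1)
n_profiles, n_channels = 10_000, 9           # placeholder database size
tb_sim = rng.normal(250.0, 15.0, (n_profiles, n_channels))  # simulated radiances
rain_sim = rng.gamma(2.0, 2.0, n_profiles)   # simulated surface rain rates
tb_obs = rng.normal(250.0, 15.0, n_channels)                # one observation

# Gaussian radiance-error model with diagonal covariance (assumption).
sigma = 3.0                                  # K, per-channel error
resid = tb_sim - tb_obs
log_w = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
w = np.exp(log_w - log_w.max())              # stabilized weights

rain_est = np.sum(w * rain_sim) / np.sum(w)  # composited best estimate
rain_var = np.sum(w * (rain_sim - rain_est) ** 2) / np.sum(w)  # Bayesian error
print(f"rain rate = {rain_est:.2f} ± {np.sqrt(rain_var):.2f} mm/h")
```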

    Errors in particle tracking velocimetry with high-speed cameras

    Velocity errors in particle tracking velocimetry (PTV) are studied. When using high-speed video cameras, the velocity error may increase at a high camera frame rate. This increase in velocity error is due to particle-position uncertainty, which is one of the two sources of velocity error studied here. The other source is particle acceleration, which has the opposite trend, diminishing at higher frame rates. Both kinds of errors can propagate into quantities calculated from velocity, such as the kinetic temperature of particles or correlation functions. As demonstrated in a dusty plasma experiment, the kinetic temperature of particles has no unique value when measured using PTV, but depends on the sampling time interval or frame rate. It is also shown that an artifact appears in an autocorrelation function computed from particle positions and velocities, and it becomes more severe when a small sampling time interval is used. Schemes to reduce these errors are demonstrated. Comment: 6 pages, 5 figures, Review of Scientific Instruments, 2011 (in press)
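
    The competition between the two error sources is easy to reproduce. For a finite-difference velocity v = [x(t + Δt) − x(t)]/Δt, position noise of size δx contributes an error of roughly √2 δx/Δt, which grows at high frame rates, while acceleration contributes roughly a Δt/2, which grows at low frame rates. A toy sketch with an oscillating particle, all parameters illustrative:

```python
import numpy as np

# Toy demonstration of the two PTV error sources: position noise dominates
# at small dt (high frame rate), acceleration error dominates at large dt.

rng = np.random.default_rng(2)
delta_x = 1e-3          # position uncertainty (illustrative units)
omega = 2.0 * np.pi     # oscillation frequency of the test particle
t = np.linspace(0.0, 10.0, 100_001)
x_true = np.sin(omega * t)

for step in (1, 10, 100, 1000):        # sample every `step` points
    dt = step * (t[1] - t[0])
    x = x_true[::step] + rng.normal(0.0, delta_x, size=x_true[::step].shape)
    v = np.diff(x) / dt                # forward-difference PTV velocity
    v_true = omega * np.cos(omega * t[::step][:-1])
    rms = np.sqrt(np.mean((v - v_true) ** 2))
    print(f"dt = {dt:.4f}  rms velocity error = {rms:.4f}")
```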

    Nonparametric tests of structure for high angular resolution diffusion imaging in Q-space

    High angular resolution diffusion imaging data is the observed characteristic function for the local diffusion of water molecules in tissue. This data is used to infer structural information in brain imaging. Nonparametric scalar measures are proposed to summarize such data, and to locally characterize spatial features of the diffusion probability density function (PDF), relying on the geometry of the characteristic function. Summary statistics are defined so that their distributions are, to first order, both independent of nuisance parameters and analytically tractable. The dominant direction of the diffusion at a spatial location (voxel) is determined, and a new set of axes is introduced in Fourier space. Variation quantified in these axes determines the local spatial properties of the diffusion density. Nonparametric hypothesis tests for determining whether the diffusion is unimodal, isotropic, or multi-modal are proposed. More subtle characteristics of white-matter microstructure, such as the degree of anisotropy of the PDF and its symmetry compared with a variety of asymmetric PDF alternatives, may be ascertained directly in the Fourier domain without parametric assumptions on the form of the diffusion PDF. We simulate a set of diffusion processes and characterize their local properties using the newly introduced summaries. We show how complex white-matter structures across multiple voxels exhibit clear ellipsoidal and asymmetric structure in simulation, and assess the performance of the statistics in clinically acquired magnetic resonance imaging data. Comment: Published at http://dx.doi.org/10.1214/10-AOAS441 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
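
    As a simplified illustration of the first step, finding the dominant diffusion direction: under a Gaussian (single-tensor) model the observed characteristic function satisfies log S(q) = −τ qᵀDq, so fitting that quadratic form and taking the leading eigenvector of D yields the dominant axis about which the new Fourier-space coordinates are defined. This is a sketch under that Gaussian assumption, not the paper's nonparametric procedure; all values are illustrative.

```python
import numpy as np

# Simplified illustration: under a Gaussian diffusion model the measured
# signal is S(q) = exp(-tau * q^T D q), so log S is a quadratic form in q.
# Fitting that form and taking the leading eigenvector of D recovers the
# dominant diffusion direction used to define the new Fourier-space axes.

rng = np.random.default_rng(3)
D_true = np.diag([1.7, 0.3, 0.2])             # illustrative tensor
tau = 1.0

q = rng.normal(size=(200, 3))                 # sampled q-space points
S = np.exp(-tau * np.einsum("ij,jk,ik->i", q, D_true, q))
S *= 1.0 + 0.01 * rng.normal(size=S.shape)    # mild measurement noise

# Linear least squares for the 6 unique tensor entries from -log(S).
A = np.column_stack([q[:, 0]**2, q[:, 1]**2, q[:, 2]**2,
                     2*q[:, 0]*q[:, 1], 2*q[:, 0]*q[:, 2], 2*q[:, 1]*q[:, 2]])
d = np.linalg.lstsq(A, -np.log(S) / tau, rcond=None)[0]
D = np.array([[d[0], d[3], d[4]],
              [d[3], d[1], d[5]],
              [d[4], d[5], d[2]]])

evals, evecs = np.linalg.eigh(D)
print("dominant direction:", evecs[:, -1])    # leading eigenvector
```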