
    Thermodynamic Properties of Generalized Exclusion Statistics

    We analytically calculate some thermodynamic quantities of an ideal $g$-on gas obeying generalized exclusion statistics. We show that the specific heat of a $g$-on gas ($g \neq 0$) vanishes linearly in any dimension as $T \to 0$ when the particle number is conserved, and exhibits an interesting dual symmetry that relates the particle statistics at $g$ to the hole statistics at $1/g$ at low temperatures. We derive the complete solution for the cluster coefficients $b_l(g)$ as a function of Haldane's statistical interaction $g$ in $D$ dimensions. We also find that the cluster coefficients $b_l(g)$ and the virial coefficients $a_l(g)$ are exactly mirror symmetric ($l$ odd) or antisymmetric ($l$ even) about $g = 1/2$. In two dimensions, we completely determine the closed forms of the cluster and virial coefficients of generalized exclusion statistics, which exactly agree with the virial coefficients of an anyon gas with a linear energy spectrum. We show that a $g$-on gas with zero chemical potential has thermodynamic properties similar to photon statistics. We discuss some physical implications of our results.
    Comment: 24 pages, RevTeX, corrected typo
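
    For orientation, the standard occupation-number formula for Haldane's generalized exclusion statistics, in the form derived by Wu (1994); this is background context, not an equation quoted from the abstract above:

    ```latex
    % Mean occupation number for an ideal g-on gas (Haldane statistics),
    % in the form derived by Wu (1994):
    \begin{equation}
      n(\epsilon) = \frac{1}{w(\xi) + g},
      \qquad
      w(\xi)^{\,g}\,\bigl[1 + w(\xi)\bigr]^{\,1-g} = \xi \equiv e^{(\epsilon - \mu)/k_B T}.
    \end{equation}
    % g = 0 recovers Bose-Einstein statistics (w = \xi - 1) and g = 1 recovers
    % Fermi-Dirac statistics (w = \xi); intermediate g interpolates between
    % them, which is the setting for the particle-hole duality g <-> 1/g
    % noted in the abstract.
    ```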

    Comparison of dogs treated for primary immune-mediated hemolytic anemia in Tuscany, Italy and Texas, USA

    This retrospective study compared clinical characteristics between dogs treated for primary immune-mediated hemolytic anemia (IMHA) at veterinary teaching hospitals in Tuscany, Italy, and Texas, USA, between 2010 and 2018.

    Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy

    The analysis of gravitational wave data involves many model selection problems. The most important example is the detection problem of deciding whether the data are consistent with instrument noise alone, or with instrument noise plus a gravitational wave signal. The analysis of data from ground based gravitational wave detectors is mostly conducted using classical statistics, and methods such as the Neyman-Pearson criterion are used for model selection. Future space based detectors, such as the Laser Interferometer Space Antenna (LISA), are expected to produce rich data streams containing the signals from many millions of sources. Determining the number of sources that are resolvable, and the most appropriate description of each source, poses a challenging model selection problem that may best be addressed in a Bayesian framework. An important class of LISA sources is the millions of low-mass binary systems within our own galaxy, tens of thousands of which will be detectable. Not only is the number of sources unknown, but so is the number of parameters required to model the waveforms. For example, a significant subset of the resolvable galactic binaries will exhibit orbital frequency evolution, while a smaller number will have measurable eccentricity. In the Bayesian approach to model selection one needs to compute the Bayes factor between competing models. Here we explore various methods for computing Bayes factors in the context of determining which galactic binaries have measurable frequency evolution. The methods explored include a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz Bayesian Information Criterion (BIC), and the Laplace approximation to the model evidence. We find good agreement between all of the approaches.
    Comment: 11 pages, 6 figures
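
    As an illustration of one of the techniques named above, here is a minimal Python sketch of a Savage-Dickey density ratio for a nested model comparison (does a binary have measurable frequency evolution, i.e. is the frequency derivative consistent with zero?). The toy posterior samples and the prior scale are hypothetical stand-ins; the paper's actual pipeline is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde, norm

    # Toy posterior samples for the frequency-derivative parameter "fdot"
    # (in a real analysis these would come from an MCMC over the full model).
    rng = np.random.default_rng(0)
    posterior_fdot = rng.normal(loc=2e-17, scale=1.5e-17, size=20_000)

    # Prior on fdot in the larger model: zero-mean Gaussian (an assumption).
    prior = norm(loc=0.0, scale=5e-17)

    # Savage-Dickey: for nested models, the Bayes factor in favour of the
    # simpler model (fdot = 0) is the posterior density over the prior
    # density, both evaluated at the nested value fdot = 0.
    posterior_density_at_zero = gaussian_kde(posterior_fdot)(0.0)[0]
    bayes_factor_simple = posterior_density_at_zero / prior.pdf(0.0)

    print(f"B(no evolution : evolution) = {bayes_factor_simple:.3f}")
    # B >> 1 favours fdot = 0; B << 1 favours measurable frequency evolution.
    ```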

    Present and future evidence for evolving dark energy

    We compute the Bayesian evidences for one- and two-parameter models of evolving dark energy, and compare them to the evidence for a cosmological constant, using current data from Type Ia supernovae, baryon acoustic oscillations, and the cosmic microwave background. We use only distance information, ignoring dark energy perturbations. We find that, under various priors on the dark energy parameters, LambdaCDM is currently favoured over the evolving dark energy models. We consider the parameter constraints that arise under Bayesian model averaging, and discuss the implications of our results for future dark energy projects seeking to detect dark energy evolution. The model selection approach complements and extends the figure-of-merit approach of the Dark Energy Task Force in assessing future experiments, and suggests a significantly modified interpretation of that statistic.
    Comment: 10 pages RevTex4, 3 figures included. Minor changes to match version accepted by PR
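
    A hedged sketch of the model-averaging step described above: given log-evidences for LambdaCDM and two evolving dark energy models, posterior model probabilities weight each model's parameter estimates. All numbers are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical log-evidences ln Z, quoted relative to LambdaCDM.
    models = ["LambdaCDM", "w = const", "w0-wa"]
    log_Z = np.array([0.0, -1.8, -2.6])

    # Posterior model probabilities, assuming equal prior odds between models.
    weights = np.exp(log_Z - log_Z.max())
    weights /= weights.sum()

    # Model-averaged estimate of w at z = 0: fixed at -1 in LambdaCDM,
    # hypothetical posterior means in the evolving models.
    w0_means = np.array([-1.0, -0.97, -0.92])
    w0_averaged = np.sum(weights * w0_means)

    for name, w in zip(models, weights):
        print(f"P({name} | data) = {w:.2f}")
    print(f"model-averaged w0 = {w0_averaged:.3f}")
    ```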

    Direct reconstruction of the quintessence potential

    We describe an algorithm which directly determines the quintessence potential from observational data, without using an equation-of-state parametrisation. The strategy is to numerically determine observational quantities as a function of the expansion coefficients of the quintessence potential, which are then constrained using a likelihood approach. We further impose a model selection criterion, the Bayesian Information Criterion, to determine the appropriate level of the potential expansion. In addition to the potential parameters, the present-day quintessence field velocity is kept as a free parameter. Our investigation includes unusual model types, such as a scalar field moving on a flat potential, or in an uphill direction, and is general enough to permit oscillating quintessence field models. We apply our method to the `gold' Type Ia supernovae sample of Riess et al. (2004), confirming the pure cosmological constant model as the best description of current supernova luminosity-redshift data. Our method is optimal for extracting quintessence parameters from future data.
    Comment: 9 pages RevTeX4 with lots of incorporated figures
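
    A minimal sketch of the model-selection step the abstract describes: fit expansions of increasing order and use the Bayesian Information Criterion to pick the expansion level. The data and chi-square fit are synthetic stand-ins, not the paper's supernova likelihood.

    ```python
    import numpy as np

    # Synthetic "observations": y(x) with Gaussian noise, standing in for
    # distance data constraining the expanded potential (truth: constant).
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.5, 60)
    sigma = 0.05
    y = 1.0 + rng.normal(0.0, sigma, x.size)

    n = x.size
    for k in range(4):                      # expansion orders 0..3
        coeffs = np.polyfit(x, y, deg=k)    # least squares = max likelihood here
        resid = y - np.polyval(coeffs, x)
        chi2 = np.sum((resid / sigma) ** 2)
        n_params = k + 1
        bic = chi2 + n_params * np.log(n)   # BIC up to a model-independent constant
        print(f"order {k}: chi2 = {chi2:6.1f}, BIC = {bic:6.1f}")
    # The lowest BIC selects the expansion level; for this synthetic constant
    # "potential" the zeroth-order model should win, mirroring the paper's
    # cosmological-constant result.
    ```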

    Characterizing the role of the pre-SMA in the control of speed/accuracy trade-off with directed functional connectivity mapping and multiple solution reduction

    Several plausible theories of the neural implementation of speed/accuracy trade-off (SAT), the phenomenon in which individuals may alternately emphasize speed or accuracy during the performance of cognitive tasks, have been proposed, and multiple lines of evidence point to the involvement of the pre-supplementary motor area (pre-SMA). However, as the nature and directionality of the pre-SMA's functional connections to other regions involved in cognitive control and task processing are not known, its precise role in the top-down control of SAT remains unclear. Although recent advances in cross-sectional path modeling provide a promising way of characterizing these connections, such models are limited by their tendency to produce multiple equivalent solutions. In a sample of healthy adults (N = 18), the current study uses the novel approach of Group Iterative Multiple Model Estimation for Multiple Solutions (GIMME-MS) to assess directed functional connections between the pre-SMA, other regions previously linked to control of SAT, and regions putatively involved in evidence accumulation for the decision task. Results reveal a primary role of the pre-SMA in modulating activity in regions involved in the decision process, but suggest that this region receives top-down input from the dorsolateral prefrontal cortex (DLPFC). Findings also demonstrate the utility of GIMME-MS and solution-reduction methods for obtaining valid directional inferences from connectivity path models.
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/149347/1/hbm24493.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/149347/2/hbm24493_am.pd

    On the Gravitational Collapse of a Gas Cloud in Presence of Bulk Viscosity

    We analyze the effects induced by bulk viscosity on the dynamics of extreme gravitational collapse. The aim of the work is to investigate whether viscous corrections to the evolution of a collapsing gas cloud influence the fragmentation process. To this end we study the dynamics of a uniform and spherically symmetric cloud, including the negative pressure contribution associated with bulk viscosity. Within the framework of a Newtonian approach (whose range of validity is outlined), we extend both the Lagrangian and the Eulerian motion of the system to the viscous case, and we treat the asymptotic evolution for a viscosity coefficient of the form $\zeta = \zeta_0 \rho^{\nu}$ ($\rho$ being the cloud density and $\zeta_0 = \mathrm{const}$). We show how, in the adiabatic-like regime of the gas (i.e. when the polytropic index takes values $4/3 < \gamma \leq 5/3$), density contrasts asymptotically vanish, preventing the formation of sub-structures. We conclude that in the adiabatic-like collapse the top-down mechanism of structure formation is suppressed as soon as sufficiently strong viscous effects are taken into account. This feature is absent in the isothermal-like collapse (i.e. $1 \leq \gamma < 4/3$), where sub-structure formation persists and follows the same behavior as in the non-viscous case. We emphasize that in the adiabatic-like collapse the bulk viscosity is also responsible for the appearance of a threshold scale beyond which perturbations begin to increase.
    Comment: 13 pages, no figures
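
    For orientation, the standard Newtonian form in which bulk viscosity enters models of this kind, as a correction to the isotropic pressure; a background restatement, not an equation quoted from the paper:

    ```latex
    % Effective isotropic pressure of a fluid with bulk viscosity \zeta:
    \begin{equation}
      p_{\mathrm{eff}} = p - \zeta\,\nabla\!\cdot\vec{v},
      \qquad \zeta = \zeta_0\,\rho^{\nu}.
    \end{equation}
    % For a homogeneous sphere of radius a(t), \nabla\cdot\vec{v} = 3\dot{a}/a;
    % during expansion the viscous term acts as the negative pressure
    % contribution mentioned in the abstract.
    ```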

    Exact steady-state velocity of ratchets driven by random sequential adsorption

    We solve the problem of discrete translocation of a polymer through a pore, driven by the irreversible, random sequential adsorption of particles on one side of the pore. Although the kinetics of the wall motion and the deposition are coupled, we find the exact steady-state distribution for the gap between the wall and the nearest deposited particle. This result enables us to construct the mean translocation velocity, demonstrating that translocation is faster when the adsorbing particles are smaller. Monte Carlo simulations also show that smaller particles give less dispersion in the ratcheted motion. We also define and compare the relative efficiencies of ratcheting by deposition of particles of different sizes, and we describe an associated "zone-refinement" process.
    Comment: 11 pages, 4 figures. New asymptotic result for low chaperone density added. The exact translocation velocity is proportional to (chaperone density)^(1/3).
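
    As a qualitative illustration of the mechanism (smaller adsorbing particles ratchet the wall forward faster), a toy lattice Monte Carlo, not the paper's exact model: a wall performs an unbiased random walk but cannot step back past the nearest irreversibly deposited particle, and particles of size m deposit uniformly at random in the empty gap behind it. All rates and sizes are illustrative.

    ```python
    import numpy as np

    def ratchet_velocity(m, k=0.05, steps=100_000, seed=42):
        """Toy ratchet: a wall random-walks on a 1D lattice; hard particles
        of size m deposit irreversibly in the empty gap behind it and block
        the wall from stepping back past them."""
        rng = np.random.default_rng(seed)
        wall = 0    # wall position
        edge = 0    # leftmost position the wall may occupy (particle edge)
        for _ in range(steps):
            step = -1 if rng.random() < 0.5 else 1
            if wall + step >= edge:      # back-steps blocked by the particle
                wall += step
            gap = wall - edge            # empty sites between particle and wall
            if gap >= m and rng.random() < min(1.0, k * (gap - m + 1)):
                left = rng.integers(edge, wall - m + 1)  # uniform placement
                edge = left + m          # new nearest-particle edge
        return wall / steps

    for m in (1, 2, 4):
        print(f"particle size {m}: v ~ {ratchet_velocity(m):.4f} sites/step")
    # Smaller particles can deposit into smaller gaps, rectifying more of the
    # wall's fluctuations; the drift should fall with m, qualitatively
    # matching the paper's exact steady-state result.
    ```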

    Prediction and explanation in the multiverse

    Probabilities in the multiverse can be calculated by assuming that we are typical representatives in a given reference class. But is this class well defined? What should be included in the ensemble in which we are supposed to be typical? There is a widespread belief that this question is inherently vague, and that there are various possible choices for the types of reference objects which should be counted in. Here we argue that the "ideal" reference class (for the purpose of making predictions) can be defined unambiguously in a rather precise way, as the set of all observers with identical information content. When the observers in a given class perform an experiment, the class branches into subclasses whose members learn different information from the outcome of that experiment. The probabilities for the different outcomes are defined as the relative numbers of observers in each subclass. For practical purposes, wider reference classes can be used, in which we trace over all information that is uncorrelated with the outcome of the experiment, or whose correlation with it is beyond our current understanding. We argue that, once we have gathered all practically available evidence, the optimal strategy for making predictions is to consider ourselves typical in any reference class we belong to, unless we have evidence to the contrary. In the latter case, the class must be correspondingly narrowed.
    Comment: Minor clarifications added