
    Steady-state simulations using weighted ensemble path sampling

    We extend the weighted ensemble (WE) path sampling method to perform rigorous statistical sampling for systems at steady state. The straightforward steady-state implementation of WE is directly practical for simple landscapes, but not when significant metastable intermediate states are present. We therefore develop an enhanced WE scheme, building on existing ideas, which accelerates attainment of steady state in complex systems. We apply both WE approaches to several model systems, confirming their correctness and efficiency by comparison with brute-force results. The enhanced version is significantly faster than both brute-force simulation and straightforward WE for systems with WE bins that accurately reflect the reaction coordinate(s). The new WE methods can also be applied to equilibrium sampling, since equilibrium is a steady state.
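
    As a concrete illustration, here is a minimal sketch of the core WE resampling step in Python: within each bin, walkers are split or merged until the bin holds a fixed target count, with total probability weight conserved. The bin mapping, target count, and data layout are illustrative assumptions, not the paper's implementation; the enhanced steady-state scheme described above adds reweighting machinery on top of this step.

```python
import random

def we_resample(walkers, n_per_bin, bin_of):
    """One WE resampling step: within each occupied bin, split high-weight
    walkers and merge low-weight ones until the bin holds exactly n_per_bin
    walkers. Total probability weight is conserved throughout.

    walkers: list of (coords, weight); bin_of: maps coords -> bin id."""
    bins = {}
    for coords, weight in walkers:
        bins.setdefault(bin_of(coords), []).append((coords, weight))
    resampled = []
    for members in bins.values():
        # Split: replicate the heaviest walker into two half-weight copies.
        while len(members) < n_per_bin:
            members.sort(key=lambda cw: cw[1], reverse=True)
            coords, weight = members.pop(0)
            members += [(coords, weight / 2), (coords, weight / 2)]
        # Merge: combine the two lightest walkers, keeping one survivor
        # with probability proportional to its weight (unbiased on average).
        while len(members) > n_per_bin:
            members.sort(key=lambda cw: cw[1])
            (c1, w1), (c2, w2) = members.pop(0), members.pop(0)
            survivor = c1 if random.random() < w1 / (w1 + w2) else c2
            members.append((survivor, w1 + w2))
        resampled.extend(members)
    return resampled
```

    Between resampling steps each walker runs ordinary dynamics; in the steady-state setting, walkers reaching the target state are typically recycled into the initial state with their weight, so a nonzero probability flux is maintained.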

    A deductive statistical mechanics approach for granular matter

    We introduce a deductive statistical mechanics approach for granular materials which is formally built from a few realistic physical assumptions. The main finding is a universal behavior for the distribution of the density fluctuations. Such a distribution is the equivalent of the Maxwell-Boltzmann distribution in the kinetic theory of gases. The comparison with a very extensive set of experimental and simulation data for packings of monosized spherical grains reveals a remarkably good quantitative agreement with the theoretical predictions for the density fluctuations, both at the grain level and at the global system level. Such agreement is robust over a broad range of packing fractions and is observed in several distinct systems prepared by using different methods. The equilibrium distributions are characterized by only one parameter (k), a quantity very sensitive to changes in the structural organization. The thermodynamic equivalent of k and its relation with the `granular temperature' are also discussed. Comment: 15 pages, 6 figures
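
    For intuition only: if one assumes the equilibrium distribution takes a gamma form in the free volume (a common ansatz for monodisperse sphere packings, assumed here rather than taken from the paper), the parameter k can be estimated from volume fluctuations by moment matching. The minimal volume and sample values below are hypothetical.

```python
import numpy as np

def estimate_k(volumes, v_min):
    """Moment-matching estimate of the shape parameter k, assuming the
    cell volumes V follow a gamma distribution in the free volume
    V - v_min, for which k = <V - v_min>^2 / Var(V)."""
    free = np.asarray(volumes) - v_min
    return free.mean() ** 2 / free.var()

# Toy self-check: samples drawn with k = 12 should give an estimate near 12.
rng = np.random.default_rng(0)
v_min = 0.64  # hypothetical minimal cell volume, not a fitted value
samples = v_min + rng.gamma(shape=12.0, scale=0.002, size=100_000)
print(estimate_k(samples, v_min))
```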

    Single-shot single-gate RF spin readout in silicon

    For solid-state spin qubits, single-gate RF readout can help minimise the number of gates required for scale-up to many qubits, since the readout sensor can be integrated into the existing gates required to manipulate the qubits (Veldhorst 2017, Pakkiam 2018). However, a key requirement for a scalable quantum computer is that we must be capable of resolving the qubit state within a single shot, that is, a single measurement (DiVincenzo 2000). Here we demonstrate single-gate, single-shot readout of a singlet-triplet spin state in silicon, with an average readout fidelity of 82.9% at a 3.3 kHz measurement bandwidth. We use this technique to measure a triplet T_- to singlet S_0 relaxation time of 0.62 ms in precision donor quantum dots in silicon. We also show that the use of RF readout does not impact the maximum readout time at zero detuning, limited by the S_0 to T_- decay, which remained at approximately 2 ms. This establishes single-gate sensing as a viable readout method for spin qubits.
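
    A relaxation time like the 0.62 ms quoted above is the kind of quantity typically extracted by fitting an exponential decay to the measured triplet probability versus wait time. The sketch below is a generic fit on synthetic data, not the authors' analysis pipeline; the initial probability, noise level, and time grid are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amp, t1):
    """Exponential relaxation of the triplet probability toward the singlet."""
    return amp * np.exp(-t / t1)

# Synthetic data standing in for averaged single-shot records.
t = np.linspace(0.0, 3e-3, 30)  # wait times in seconds
rng = np.random.default_rng(1)
p_triplet = decay(t, 0.9, 0.62e-3) + rng.normal(0.0, 0.02, t.size)

(amp, t1), _ = curve_fit(decay, t, p_triplet, p0=(1.0, 1e-3))
print(f"fitted relaxation time: {t1 * 1e3:.2f} ms")  # ~0.62 ms
```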

    Tricritical Points in Random Combinatorics: the (2+p)-SAT case

    The (2+p)-Satisfiability (SAT) problem interpolates between different classes of complexity theory and is believed to be of basic interest in understanding the onset of typical-case complexity in random combinatorics. In this paper, a tricritical point in the phase diagram of the random (2+p)-SAT problem is analytically computed using the replica approach and found to lie in the range 2/5 <= p_0 <= 0.416. These bounds on p_0 are in agreement with previous numerical simulations and rigorous results. Comment: 7 pages, 1 figure, RevTeX, to appear in J. Phys.
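
    For readers who want to experiment numerically, a random (2+p)-SAT instance mixes 2-clauses and 3-clauses: each clause is a 3-clause with probability p and a 2-clause otherwise. Below is a minimal generator; the uniform-literal model is the standard random-SAT ensemble, assumed here rather than taken from the paper.

```python
import random

def random_2p_sat(n_vars, n_clauses, p, seed=0):
    """Draw a random (2+p)-SAT formula: each clause is a 3-clause with
    probability p and a 2-clause otherwise; literals are distinct
    variables, each negated with probability 1/2."""
    rng = random.Random(seed)
    formula = []
    for _ in range(n_clauses):
        k = 3 if rng.random() < p else 2
        chosen = rng.sample(range(1, n_vars + 1), k)
        formula.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return formula

print(random_2p_sat(n_vars=10, n_clauses=5, p=0.4))
```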

    Comparison of 2.3 & 5 megapixel (MP) resolution monitors when detecting mammography image blurring

    Background - Image blurring in Full Field Digital Mammography (FFDM) is reported to be a problem within many UK breast screening units, resulting in a significant proportion of technical repeats/recalls. Our study investigates monitors of differing pixel resolution, and whether there is a difference in blurring detection between a 2.3 MP technical review monitor and a 5 MP standard reporting monitor. Methods - Simulation software was created to induce different magnitudes of blur on 20 artifact-free FFDM screening images. 120 blurred and non-blurred images were randomized and displayed on the 2.3 and 5 MP monitors; they were reviewed by 28 trained observers. Monitors were calibrated to the DICOM Grayscale Standard Display Function. A t-test was used to determine whether significant differences exist in blurring detection between the monitors. Results - The blurring detection rate on the 2.3 MP monitor for 0.2, 0.4, 0.6, 0.8 and 1 mm blur was 46, 59, 66, 77 and 78% respectively; on the 5 MP monitor it was 44, 70, 83, 96 and 98%. All the non-motion images were identified correctly. A statistically significant difference (p < 0.01) in the blurring detection rate between the two monitors was demonstrated. Conclusions - Given the results of this study, and knowing that monitors as low as 1 MP are used in clinical practice, we speculate that technical recall/repeat rates because of blurring could be reduced if higher-resolution monitors are used for technical review at the time of imaging. Further work is needed to determine the minimum monitor specification for visual blurring detection.
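
    The study's blur-simulation software is not described in detail here. As a hedged sketch, blur of a given physical magnitude can be induced by Gaussian filtering with the kernel width converted from millimetres to pixels via the detector pixel pitch; both the pitch value and the isotropic Gaussian model (rather than, say, directional motion blur) are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_blur(image, blur_mm, pixel_pitch_mm=0.05):
    """Blur an image by a physical magnitude in mm, converting the Gaussian
    kernel width to pixels via the detector pixel pitch. Both the pitch
    value and the Gaussian blur model are illustrative assumptions."""
    return gaussian_filter(image, sigma=blur_mm / pixel_pitch_mm)

image = np.random.rand(512, 512)           # stand-in for an FFDM image
for blur_mm in (0.2, 0.4, 0.6, 0.8, 1.0):  # magnitudes used in the study (mm)
    blurred = simulate_blur(image, blur_mm)
```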

    Fast computation by block permanents of cumulative distribution functions of order statistics from several populations

    The joint cumulative distribution function for order statistics arising from several different populations is given in terms of the distribution functions of the populations. The computational cost of the formula in the case of two populations is still exponential in the worst case, but it is a dramatic improvement compared to the general formula by Bapat and Beg. When only the joint distribution function of a fixed-size subset of the order statistics is needed, the complexity is polynomial in the case of two populations. Comment: 21 pages, 3 figures
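
    The exponential worst case comes from evaluating permanents, which is how the Bapat-Beg formula expresses these distribution functions. As a point of reference, the permanent of an n x n matrix can be computed in O(2^n poly(n)) time with Ryser's inclusion-exclusion formula, sketched below; this is a textbook routine, not the paper's block-permanent algorithm.

```python
from itertools import combinations

def permanent(matrix):
    """Permanent of an n x n matrix by Ryser's inclusion-exclusion formula:
    perm(A) = sum over non-empty column subsets S of
    (-1)^(n-|S|) * prod_i sum_{j in S} a_ij."""
    n = len(matrix)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in matrix:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

print(permanent([[1, 1], [1, 1]]))  # 2 (two permutations, each with product 1)
```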

    Direct Observation of Sub-Poissonian Number Statistics in a Degenerate Bose Gas

    We report the direct observation of sub-Poissonian number fluctuations for a degenerate Bose gas confined in an optical trap. Reduction of number fluctuations below the Poissonian limit is observed for average numbers ranging from 300 down to 60 atoms. Comment: 5 pages, 4 figures
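
    Sub-Poissonian here means the number variance falls below the mean; the standard diagnostic is the Fano factor Var(N)/<N>, which equals 1 for Poissonian (shot-noise-limited) statistics. A toy check on synthetic Poissonian counts (the atom numbers are illustrative, not the paper's data):

```python
import numpy as np

def fano(counts):
    """Fano factor Var(N)/<N>: 1 for Poissonian counting statistics,
    below 1 for sub-Poissonian number fluctuations."""
    counts = np.asarray(counts)
    return counts.var() / counts.mean()

rng = np.random.default_rng(2)
shots = rng.poisson(100, size=1000)  # illustrative shot-noise-limited data
print(fano(shots))                   # ~1.0; the reported gas sits below this
```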

    The Hubble series: Convergence properties and redshift variables

    In cosmography, cosmokinetics, and cosmology it is quite common to encounter physical quantities expanded as a Taylor series in the cosmological redshift z. Perhaps the best-known exemplar of this phenomenon is the Hubble relation between distance and redshift. However, we now have considerable high-z data available; for instance, we have supernova data at least back to redshift z = 1.75. This opens up the theoretical question of whether or not the Hubble series (or more generally any series expansion based on the z-redshift) actually converges for large redshift. Based on a combination of mathematical and physical reasoning, we argue that the radius of convergence of any series expansion in z is less than or equal to 1, and that z-based expansions must break down for z > 1, corresponding to a universe less than half its current size. Furthermore, we shall argue on theoretical grounds for the utility of an improved parameterization y = z/(1+z). In terms of the y-redshift we again argue that the radius of convergence of any series expansion in y is less than or equal to 1, so that y-based expansions are likely to be good all the way back to the big bang (y = 1), but that y-based expansions must break down for y < -1, now corresponding to a universe more than twice its current size. Comment: 15 pages, 2 figures, accepted for publication in Classical and Quantum Gravity
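
    The improved variable is simple enough to state as a one-liner: y = z/(1+z) compresses the entire history of the universe, z in [0, inf), into y in [0, 1). A small numerical illustration (the sample redshifts, apart from the z = 1.75 quoted above, are arbitrary):

```python
def y_of_z(z):
    """Improved expansion variable y = z/(1+z), mapping z in [0, inf)
    (today back to the big bang) onto y in [0, 1)."""
    return z / (1.0 + z)

for z in (0.5, 1.0, 1.75, 10.0, 1000.0):
    print(f"z = {z:7.2f}  ->  y = {y_of_z(z):.4f}")
# z = 1.75, the deepest supernova data cited above, gives y ~ 0.64,
# comfortably inside the expected radius of convergence |y| <= 1.
```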

    Explicit kinetic heterogeneity: mechanistic models for interpretation of labeling data of heterogeneous cell populations

    Estimation of division and death rates of lymphocytes in different conditions is vital for quantitative understanding of the immune system. Deuterium, in the form of deuterated glucose or heavy water, can be used to measure rates of proliferation and death of lymphocytes in vivo. Inferring these rates from labeling and delabeling curves has been subject to considerable debate, with different groups suggesting different mathematical models for that purpose. We show that the three models that are most commonly used are in fact mathematically identical and differ only in their interpretation of the estimated parameters. By extending these previous models, we here propose a more mechanistic approach for the analysis of data from deuterium labeling experiments. We construct a model of "kinetic heterogeneity" in which the total cell population consists of many sub-populations with different rates of cell turnover. In this model, for a given distribution of the rates of turnover, the predicted fraction of labeled DNA accumulated and lost can be calculated. Our model reproduces several previously made experimental observations, such as a negative correlation between the length of the labeling period and the rate at which labeled DNA is lost after label cessation. We demonstrate the reliability of the new explicit kinetic heterogeneity model by applying it to artificially generated datasets, and illustrate its usefulness by fitting experimental data. In contrast to previous models, the explicit kinetic heterogeneity model 1) provides a mechanistic way of interpreting labeling data; 2) allows for a non-exponential loss of labeled cells during delabeling; and 3) can be used to describe data with variable labeling lengths.
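
    In the simplest reading of such a model, sub-population i with fraction f_i and turnover rate p_i gains label as 1 - exp(-p_i t) while label is administered and loses it exponentially afterwards, so the total labeled fraction is a weighted sum of these curves. The sketch below encodes that basic form; the specific fractions, rates, and labeling length are illustrative, not fitted values from the paper.

```python
import numpy as np

def labeled_fraction(t, t_label, fracs, rates):
    """Labeled-DNA fraction for a kinetically heterogeneous population:
    sub-population i (fraction fracs[i], turnover rate rates[i]) gains
    label as 1 - exp(-p*t) while label is given (t <= t_label) and loses
    it as exp(-p*(t - t_label)) afterwards."""
    t = np.asarray(t, dtype=float)
    up = np.minimum(t, t_label)          # time spent accumulating label
    down = np.maximum(t - t_label, 0.0)  # time spent losing label
    return sum(f * (1.0 - np.exp(-p * up)) * np.exp(-p * down)
               for f, p in zip(fracs, rates))

t = np.linspace(0.0, 60.0, 200)  # days
curve = labeled_fraction(t, t_label=21.0,
                         fracs=[0.2, 0.8], rates=[0.2, 0.005])  # per day
```

    Because slowly turning-over sub-populations dominate late delabeling, the overall loss of label is non-exponential, consistent with point 2) above.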