
    On the non-ergodicity of the Swendsen-Wang-Kotecky algorithm on the kagome lattice

    We study the properties of the Wang-Swendsen-Kotecky cluster Monte Carlo algorithm for simulating the 3-state kagome-lattice Potts antiferromagnet at zero temperature. We prove that this algorithm is not ergodic for symmetric subsets of the kagome lattice with fully periodic boundary conditions: given an initial configuration, not all configurations are accessible via Monte Carlo steps. The same conclusion holds for single-site dynamics. (Comment: LaTeX2e, 22 pages, 11 figures using the pstricks package, uses iopart.sty; final version accepted in journal.)
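
    The zero-temperature cluster move at the heart of the algorithm above can be sketched in a few lines. This is a minimal sketch on an arbitrary graph: pick two of the q colors, find the connected components of the subgraph induced by sites carrying those two colors, and swap the two colors within each component independently with probability 1/2. The kagome geometry and the boundary conditions studied in the paper are not reproduced here; `wsk_step`, `colors`, and `neighbors` are hypothetical names for illustration.

```python
import random

def wsk_step(colors, neighbors, q=3):
    """One Wang-Swendsen-Kotecky cluster move at zero temperature.

    colors: dict site -> color in range(q), assumed a proper coloring
    (no two adjacent sites share a color).
    neighbors: dict site -> list of adjacent sites.
    Swapping the two chosen colors inside a connected cluster preserves
    properness, so every visited configuration remains a ground state.
    """
    a, b = random.sample(range(q), 2)
    active = {s for s, c in colors.items() if c in (a, b)}
    seen = set()
    for start in active:
        if start in seen:
            continue
        # depth-first search collecting one cluster of {a, b}-colored sites
        cluster, stack = [], [start]
        seen.add(start)
        while stack:
            s = stack.pop()
            cluster.append(s)
            for t in neighbors[s]:
                if t in active and t not in seen:
                    seen.add(t)
                    stack.append(t)
        if random.random() < 0.5:  # flip the whole cluster a <-> b
            for s in cluster:
                colors[s] = b if colors[s] == a else a
    return colors
```

    The paper's non-ergodicity result says precisely that repeated applications of such moves, on symmetric kagome subsets with periodic boundaries, cannot reach every proper 3-coloring from a given start.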

    Metastates in mean-field models with random external fields generated by Markov chains

    We extend the construction by Kuelske and Iacobelli of metastates in finite-state mean-field models in independent disorder to situations where the local disorder terms are a sample of an external ergodic Markov chain in equilibrium. We show that for non-degenerate Markov chains, the structure of the theorems is analogous to the case of i.i.d. variables when the limiting weights in the metastate are expressed with the aid of a CLT for the occupation time measure of the chain. As a new phenomenon we also show in a Potts example that, for a degenerate non-reversible chain, this CLT approximation is not enough, and the metastate can have less symmetry than the symmetry of the interaction and a Gaussian approximation of disorder fluctuations would suggest. (Comment: 20 pages, 2 figures.)
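
    The occupation time measure that drives the limiting weights above can be illustrated with a toy simulation. This is only a sketch of the underlying ergodic-averaging mechanism (not of the metastate construction itself): for a two-state ergodic chain, the fraction of time spent in each state concentrates around the stationary distribution, with CLT-scale fluctuations. The function name and parameters are hypothetical.

```python
import random

def occupation_fraction(p01, p10, n, seed=0):
    """Fraction of time a two-state Markov chain spends in state 1.

    Transition probabilities: P(0->1) = p01, P(1->0) = p10, both in
    (0, 1), so the chain is ergodic with stationary weight
    pi_1 = p01 / (p01 + p10).  For large n the empirical fraction
    fluctuates around pi_1 on the CLT scale 1/sqrt(n).
    """
    rng = random.Random(seed)
    state, time_in_1 = 0, 0
    for _ in range(n):
        if state == 0:
            state = 1 if rng.random() < p01 else 0
        else:
            state = 0 if rng.random() < p10 else 1
        time_in_1 += state
    return time_in_1 / n
```

    The paper's degenerate-chain example is exactly a case where this Gaussian picture of the fluctuations fails to capture the metastate's symmetry.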

    Design of experiments in medical physics: Application to the AAA beam model validation

    Purpose: To evaluate the usefulness of the design of experiments in the analysis of multiparametric problems related to quality assurance in radiotherapy. The main motivation is to use this statistical method to optimize the quality assurance processes in the validation of beam models. Method: Considering the Varian Eclipse system, eight parameters with several levels were selected: energy, MLC, depth, X, Y1 and Y2 jaw dimensions, wedge and wedge jaw. A Taguchi table was used to define 72 validation tests. Measurements were conducted in water using a CC04 on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a 2300IX accelerator matched by the vendor. Dose was computed using the AAA algorithm. The same raw data were used for all accelerators during the beam modelling. Results: The mean difference between computed and measured doses was 0.1 ± 0.5% for all beams and all accelerators, with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses were within 0.6% for all accelerators. The energy was found to be an influencing parameter, but the deviations observed were smaller than 1% and not considered clinically significant. Conclusion: Design of experiments can help define the optimal measurement set to validate a beam model. The proposed method can be used to identify the prognostic factors of dose accuracy. The beam models were validated for the 4 accelerators, which were found dosimetrically equivalent even though the accelerator characteristics differ.
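
    The comparison of computed and measured doses reported above (mean ± standard deviation, maximum difference, check against a 3% tolerance) can be sketched as a small summary function. This is an illustrative sketch, not the authors' analysis pipeline; `validate_beam_model` and its inputs are hypothetical.

```python
def validate_beam_model(measured, computed, tolerance_pct=3.0):
    """Summarise percent dose differences for a set of validation beams.

    measured, computed: parallel lists of doses for the same beams
    (hypothetical readings).  Differences are expressed in percent of
    the measured dose.  Returns (mean, std, max_abs, within_tolerance).
    """
    diffs = [100.0 * (c - m) / m for m, c in zip(measured, computed)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    max_abs = max(abs(d) for d in diffs)
    return mean, var ** 0.5, max_abs, max_abs < tolerance_pct
```

    With the paper's figures, such a summary would read 0.1 ± 0.5% with a 2.4% maximum, i.e. all beams inside the 3% tolerance.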

    A new robust statistical method for treatment planning systems validation using experimental designs

    Introduction: Dose computation verification is an important part of acceptance testing. The IAEA Tecdoc 1540 and 1583 suggest comparing computed dose to measurements for several beam configurations. However, this process is time-consuming, and results out of tolerance are often left unexplained. Purpose: To validate a treatment planning system using experimental designs, which allow evaluating several parameters in a few tests selected by a robust statistical method. Materials and methods: The Taguchi table L36 (2^11 × 3^12) was used to determine the 72 beams needed to test the 7 parameters chosen: energy, MLC, depth, jaw field size in the X, Y1 and Y2 directions, and wedge. Measurements were conducted in water using a CC04 (IBA) on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a C-series Clinac (Varian). Dose was computed using the AAA algorithm (Eclipse, version 11). The same raw data were used for all accelerators during the algorithm configuration. Results: The mean difference between computed and measured doses was 0.1 ± 0.5% for all tested beams and all linacs, with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses were within 0.6% for all linacs. No studied parameter led to a statistically significant deviation between computed and measured doses. Conclusion: Experimental design is a robust statistical method to validate an algorithm. Only 2 h of measurements were needed to evaluate 7 parameters. Furthermore, the commissioned accelerators were found dosimetrically equivalent even though the linac characteristics differ.
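
    The point of an orthogonal array such as the L36 above is that a small fraction of the full factorial still exposes every factor level in a balanced way. A minimal sketch with the tiny standard L4(2^3) array (4 runs instead of the 2^3 = 8 of the full factorial) illustrates the idea; the helper names are hypothetical and this is not the L36 used in the study.

```python
from itertools import product

def full_factorial(levels):
    """All combinations of factor levels (the exhaustive test set)."""
    return list(product(*levels))

# Standard L4(2^3) orthogonal array: 4 runs cover 3 two-level factors
# so that every pair of columns sees all 4 level pairs exactly once.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def is_orthogonal(array, i, j):
    """Check that columns i and j are balanced over all level pairs."""
    pairs = [(row[i], row[j]) for row in array]
    return sorted(pairs) == sorted(product([0, 1], repeat=2))
```

    The same balance property, scaled up to eleven two-level and twelve three-level columns, is what lets the L36 table screen 7 beam parameters with only 72 measured beams.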

    Networks become navigable as nodes move and forget

    We propose a dynamical process for network evolution, aiming at explaining the emergence of the small-world phenomenon, i.e., the statistical observation that any pair of individuals is linked by a short chain of acquaintances computable by a simple decentralized routing algorithm, known as greedy routing. Previously proposed dynamical processes made it possible to demonstrate experimentally (by simulations) that the small-world phenomenon can emerge from local dynamics. However, the analysis of greedy routing using the probability distributions arising from these dynamics is quite complex because of mutual dependencies. In contrast, our process enables complete formal analysis. It is based on the combination of two simple processes: a random walk process and a harmonic forgetting process. Both processes reflect natural behaviors of the individuals, viewed as nodes in the network of inter-individual acquaintances. We prove that, in k-dimensional lattices, the combination of these two processes generates long-range links that are mutually independent and distributed according to a k-harmonic distribution. We analyze the performance of greedy routing in the stationary regime of our process, and prove that the expected number of steps for routing from any source to any target in any multidimensional lattice is a polylogarithmic function of the distance between the two nodes in the lattice. To the best of our knowledge, these results are the first formal proof that navigability in small worlds can emerge from a dynamical process for network evolution. Our dynamical process can find practical applications in the design of spatial gossip and resource location protocols. (Comment: 21 pages, 1 figure.)
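
    The stationary picture described above, long-range links drawn from a k-harmonic distribution plus greedy routing, can be sketched for k = 1. This is a static Kleinberg-style sketch on an n-cycle, not the paper's dynamical walk-and-forget process: each node gets one long-range contact with probability proportional to 1/d, and greedy routing always moves to the neighbor closest to the target. Function names are hypothetical.

```python
import random

def harmonic_long_link(n, u, rng):
    """Sample a long-range contact for node u on an n-cycle with
    probability proportional to 1/d(u, v) (the 1-harmonic law)."""
    others = [v for v in range(n) if v != u]
    dist = lambda v: min(abs(u - v), n - abs(u - v))
    weights = [1.0 / dist(v) for v in others]
    return rng.choices(others, weights=weights)[0]

def greedy_route(n, links, src, dst):
    """Greedy routing: from the current node, hop to whichever
    neighbour (the two ring neighbours or the long link) is closest
    to the target.  A ring neighbour always reduces the distance by
    one, so the walk terminates in at most d(src, dst) hops."""
    dist = lambda a, b: min(abs(a - b), n - abs(a - b))
    hops, cur = 0, src
    while cur != dst:
        nbrs = [(cur - 1) % n, (cur + 1) % n, links[cur]]
        cur = min(nbrs, key=lambda v: dist(v, dst))
        hops += 1
    return hops
```

    The paper's contribution is precisely that such harmonically distributed, mutually independent links need not be planted by hand: they arise as the stationary regime of the random-walk-plus-harmonic-forgetting dynamics.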

    Online detection and sorting of extracellularly recorded action potentials in human medial temporal lobe recordings, in vivo

    Understanding the function of complex cortical circuits requires the simultaneous recording of action potentials from many neurons in awake, behaving animals. Practically, this can be achieved by recording extracellularly from multiple brain sites using single-wire electrodes. However, in densely packed neural structures such as the human hippocampus, a single electrode can record the activity of multiple neurons. Thus, analytic techniques that differentiate action potentials of different neurons are required. Offline spike-sorting approaches are currently used to detect and sort action potentials after the experiment has finished. Because opportunities to record from the human brain are relatively rare, it is desirable to analyze large numbers of simultaneous recordings quickly using online detection and sorting algorithms. In this way, the experiment can be optimized for the particular response properties of the recorded neurons. Here we present and evaluate a method that is capable of detecting and sorting extracellular single-wire recordings in real time. We demonstrate the utility of the method by applying it to an extensive data set we acquired from chronically implanted depth electrodes in the hippocampus of human epilepsy patients. This dataset is particularly challenging because it was recorded in a noisy clinical environment. This method will allow the development of closed-loop experiments, which immediately adapt the experimental stimuli and/or tasks to the observed neural response. (Comment: 9 figures, 2 tables. Journal of Neuroscience Methods, 2006, in press.)
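
    The two stages named above, online detection and sorting, can be sketched in their simplest form: flag threshold crossings, cut a short window around each, and assign it to the nearest waveform template. This is a deliberately minimal sketch, not the paper's method (which must cope with clinical noise and overlapping units); the function name, window length, and template format are hypothetical.

```python
def detect_and_sort(signal, threshold, templates, win=3):
    """Tiny online spike detection and sorting sketch.

    signal: list of voltage samples; templates: dict label -> list of
    2*win+1 samples (one prototype waveform per putative neuron).
    Flags samples whose absolute value crosses `threshold`, cuts a
    window of 2*win+1 samples around each crossing, and assigns it to
    the template with the smallest squared distance.
    Returns a list of (sample_index, label) events.
    """
    events, i = [], win
    while i < len(signal) - win:
        if abs(signal[i]) > threshold:
            snippet = signal[i - win:i + win + 1]
            label = min(templates, key=lambda k: sum(
                (a - b) ** 2 for a, b in zip(snippet, templates[k])))
            events.append((i, label))
            i += win  # dead time: skip past the rest of this spike
        i += 1
    return events
```

    A real-time implementation would additionally build and update the templates themselves from the incoming data, which is what makes closed-loop experiments possible.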

    Credit Risk Model with Lagged Information

    No full text
