
    Fluid passage-time calculation in large Markov models

    Recent developments in the analysis of large Markov models facilitate the fast approximation of transient characteristics of the underlying stochastic process. So-called fluid analysis makes it possible to consider previously intractable models whose underlying discrete state space grows exponentially as model components are added. In this work, we show how fluid approximation techniques may be used to extract passage-time measures from performance models. We focus on two types of passage measure: passage times involving individual components, and passage times that capture the time taken for a population of components to evolve. Specifically, we show that for models of sufficient scale, passage-time distributions can be well approximated by a deterministic fluid-derived passage-time measure. Where models are not of sufficient scale, we are able to generate approximate bounds for the entire cumulative distribution function of these passage-time random variables, using moment-based techniques. Finally, we show that for some passage-time measures involving individual components the cumulative distribution function can be directly approximated by fluid techniques.
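
    As a concrete illustration of the population-level measure, the sketch below (our own toy example, not one of the paper's models) integrates the fluid ODEs for a hypothetical population of clients cycling between thinking and waiting, and reads off a deterministic passage time as the first instant the waiting count falls to half the population. The rates, population size and threshold are all assumed for illustration.

```python
# Minimal sketch: fluid passage time for a hypothetical two-state population.
import numpy as np
from scipy.integrate import solve_ivp

r_think, r_serve = 1.0, 2.0   # hypothetical transition rates
N = 1000                      # population size; the fluid limit improves as N grows

def fluid_odes(t, x):
    # x = (expected #thinking, expected #waiting), treated as real variables
    think, wait = x
    return [r_serve * wait - r_think * think,
            r_think * think - r_serve * wait]

# Start with every client waiting and integrate the deterministic fluid limit.
sol = solve_ivp(fluid_odes, (0.0, 10.0), [0.0, float(N)], dense_output=True)

# Fluid passage time: first instant the waiting population falls to N/2.
ts = np.linspace(0.0, 10.0, 5001)
wait = sol.sol(ts)[1]
passage = ts[np.argmax(wait <= N / 2)]
print(f"fluid-approximate passage time: {passage:.3f} time units")  # ≈ 0.46 here
```

    For this linear toy model the crossing time can be verified analytically (ln 4 / 3 ≈ 0.462), which is the kind of deterministic fluid-derived passage-time measure the abstract describes for large populations.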

    History of Soil Geography in the Context of Scale

    We review historical soil maps from a geographical perspective, in contrast to the more traditional temporal–historical perspective. Our geographical approach examines and compares soil maps based on their scale and classification system. To analyze the connection between scale in historical soil maps and their associated classification systems, we place soil maps into three categories of cartographic scale. We then examine how categories of cartographic scale correspond to the selection of environmental soil predictors used to initially create the maps, as reflected by the maps' legends. Previous analyses of soil mapping from the temporal perspective have concluded that soil classification systems have co-evolved with gains in soil knowledge. We conclude that paradigm shifts in soil mapping and classification can be better explained not only by their correlation with historical improvements in scientific understanding, but also by differences in mapping purpose and by advances in geographic technology. We observe that, throughout history, small cartographic scale maps have tended to emphasize climate–vegetation zonation. Medium cartographic scale maps have put more emphasis on parent material as a variable to explain soil distributions. And finally, soil maps at large cartographic scales have relied more on topography as a predictive factor. Importantly, a key characteristic of modern soil classification systems is their multi-scale approach, which incorporates these scales of phenomena within their classification hierarchies. Although most modern soil classification systems are based on soil properties, the soil map remains a model, the purpose of which is to predict the spatial distributions of those properties. Hence, multi-scale classification systems still tend to be organized, at least in part, by this observed spatial hierarchy. Although the hierarchy observed in this study is generally known in pedology today, it also represents a new view on the evolution of soil science. Increased recognition of this hierarchy may also help to more holistically combine soil formation factors with soil geography and pattern, particularly in the context of digital soil mapping.

    Non-destructive spatial heterodyne imaging of cold atoms

    We demonstrate a new method for non-destructive imaging of laser-cooled atoms. This spatial heterodyne technique forms a phase image by interfering a strong carrier laser beam with a weak probe beam that passes through the cold atom cloud. The figure of merit equals or exceeds that of phase-contrast imaging, and the technique can be used over a wider range of spatial scales. We show images of a dark-spot MOT taken with imaging fluences as low as 61 pJ/cm² at a detuning of 11 linewidths, resulting in 0.0004 photons scattered per atom.
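
    For a sense of why such fluences are non-destructive, here is a back-of-envelope estimate (our own, assuming the Rb D2 line at 780 nm and a low-saturation Lorentzian response; the abstract does not state the species or detuning convention) of the photons scattered per atom:

```python
# Hedged order-of-magnitude estimate: photons scattered per atom for a
# far-detuned, low-saturation probe. Wavelength and detuning convention
# are assumptions, not taken from the paper.
import math

h, c = 6.626e-34, 2.998e8            # Planck constant [J s], speed of light [m/s]
lam = 780e-9                         # assumed wavelength (Rb D2) [m]
fluence = 61e-12 * 1e4               # 61 pJ/cm^2 -> J/m^2
delta_over_gamma = 11.0              # detuning in natural linewidths

sigma0 = 3 * lam**2 / (2 * math.pi)          # resonant cross-section [m^2]
photons_per_area = fluence / (h * c / lam)   # probe photons per m^2
# Lorentzian suppression far off resonance: 1 / (1 + (2*delta/Gamma)^2)
n_scatt = sigma0 * photons_per_area / (1 + (2 * delta_over_gamma) ** 2)
print(f"~{n_scatt:.1e} photons scattered per atom")
```

    This lands within a factor of a few of the quoted 0.0004; the exact value depends on the atomic species, transition structure and detuning convention.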

    Prospects for Measuring Cosmic Microwave Background Spectral Distortions in the Presence of Foregrounds

    Measurements of cosmic microwave background spectral distortions have profound implications for our understanding of physical processes taking place over a vast window in cosmological history. Foreground contamination is unavoidable in such measurements and detailed signal-foreground separation will be necessary to extract cosmological science. We present MCMC-based spectral distortion detection forecasts in the presence of Galactic and extragalactic foregrounds for a range of possible experimental configurations, focusing on the Primordial Inflation Explorer (PIXIE) as a fiducial concept. We consider modifications to the baseline PIXIE mission (operating 12 months in distortion mode), searching for optimal configurations using a Fisher approach. Using only spectral information, we forecast an extended PIXIE mission to detect the expected average non-relativistic and relativistic thermal Sunyaev-Zeldovich distortions at high significance (194σ and 11σ, respectively), even in the presence of foregrounds. The ΛCDM Silk damping μ-type distortion is not detected without additional modifications of the instrument or external data. Galactic synchrotron radiation is the most problematic source of contamination in this respect, an issue that could be mitigated by combining PIXIE data with future ground-based observations at low frequencies (ν < 15–30 GHz). Assuming moderate external information on the synchrotron spectrum, we project an upper limit of |μ| < 3.6×10⁻⁷ (95% c.l.), slightly more than one order of magnitude above the fiducial ΛCDM signal from the damping of small-scale primordial fluctuations, but a factor of ≃250 improvement over the current upper limit from COBE/FIRAS. This limit could be further reduced to |μ| < 9.4×10⁻⁸ (95% c.l.) with more optimistic assumptions about low-frequency information. (Abridged) Fisher code is available at https://github.com/mabitbol/sd_foregrounds.
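
    To make the Fisher methodology concrete, the following toy forecast (ours, not the released sd_foregrounds code) computes the detection significance of a y-distortion amplitude when a single power-law synchrotron foreground is marginalized over; the channel set, noise level and fiducial y below are placeholders:

```python
# Toy Fisher forecast for a y-distortion amplitude with one foreground.
import numpy as np

T, h, k, c = 2.725, 6.626e-34, 1.381e-23, 2.998e8   # CMB temp [K] and SI constants
nu = np.linspace(30e9, 600e9, 100)                  # assumed channel centres [Hz]
I0 = 2 * (k * T) ** 3 / (h * c) ** 2                # ~270 MJy/sr in SI units

def dI_y(nu):
    # y-distortion spectral shape, d(intensity)/dy
    x = h * nu / (k * T)
    return I0 * x**4 * np.exp(x) / np.expm1(x) ** 2 * (x / np.tanh(x / 2) - 4)

def dI_sync(nu):
    # toy synchrotron shape per unit amplitude (1e-20 W m^-2 Hz^-1 sr^-1 at 100 GHz)
    return 1e-20 * (nu / 100e9) ** -3.0

y_fid = 1.77e-6                       # fiducial LCDM-like y amplitude
sigma = 5e-26 * np.ones_like(nu)      # placeholder channel noise [W m^-2 Hz^-1 sr^-1]

derivs = np.vstack([dI_y(nu), dI_sync(nu)])   # rows: d(model)/d(parameter)
F = (derivs / sigma) @ (derivs / sigma).T     # 2x2 Fisher matrix
cov = np.linalg.inv(F)                        # marginalized parameter covariance
print(f"y detected at ~{y_fid / np.sqrt(cov[0, 0]):.0f} sigma (toy numbers)")
```

    The real analysis marginalizes over many more foreground parameters and instrument configurations, which is why its quoted significances are lower than such a two-parameter toy suggests.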

    Solving the Jitter Problem in Microwave Compressed Ultrafast Electron Diffraction Instruments: Robust Sub-50 fs Cavity-Laser Phase Stabilization

    We demonstrate the compression of electron pulses in a high-brightness ultrafast electron diffraction (UED) instrument using phase-locked microwave signals directly generated from a mode-locked femtosecond oscillator. Additionally, a continuous-wave phase stabilization system that accurately corrects for phase fluctuations arising in the compression cavity from both power amplification and thermal-drift-induced detuning was designed and implemented. An improvement in the microwave timing stability from 100 fs to 5 fs RMS is measured electronically, and the long-term arrival-time stability (>10 hours) of the electron pulses improves to below our measurement resolution of 50 fs. These results demonstrate sub-relativistic ultrafast electron diffraction with compressed pulses that is no longer limited by laser-microwave synchronization.
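
    The sketch below (a toy control-loop simulation with hypothetical gains and noise levels, not the authors' electronics) illustrates the principle: integral feedback acting on the measured cavity-laser phase removes the slow thermal drift that would otherwise dominate the timing error.

```python
# Toy phase-stabilization loop: slow drift plus fast noise, integral feedback.
import numpy as np

rng = np.random.default_rng(1)
f_rf = 3e9                         # assumed microwave frequency [Hz]
n = 200_000                        # 200 s of data with a 1 ms control loop
drift = np.cumsum(rng.normal(0.0, 1e-5, n))   # slow thermal phase drift [rad]
noise = rng.normal(0.0, 1e-4, n)              # fast residual phase noise [rad]

ki, corr = 0.05, 0.0               # integral gain (hypothetical), applied correction
err = np.empty(n)
for i in range(n):
    measured = drift[i] + noise[i] - corr     # in-loop phase error [rad]
    corr += ki * measured                     # push correction onto the phase shifter
    err[i] = measured

to_fs = 1e15 / (2 * np.pi * f_rf)             # rad -> fs at f_rf
print(f"free-running timing jitter: {np.std(drift + noise) * to_fs:6.1f} fs RMS")
print(f"stabilized timing jitter:   {np.std(err) * to_fs:6.1f} fs RMS")
```

    With these made-up numbers the loop pulls roughly 100 fs-scale drift down to the few-fs level, mirroring the improvement reported in the abstract.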

    Fluid-flow solutions in PEPA to the state space explosion problem

    Achieving the appropriate performance requirements for computer-communication systems is as important as the correctness of the end result. This is particularly difficult in the case of massively parallel computer systems such as the clusters of PCs behind the likes of Google and peer-to-peer file-sharing networks such as BitTorrent. Measuring the performance of such systems using a mathematical model is invariably computationally intensive. Formal modelling techniques make the derivation of such performance measures possible, but currently suffer from the state-space explosion problem: models become intractably large even for systems of apparently modest complexity. This work develops a novel class of techniques aimed at addressing this problem by approximating a representation of massive state spaces with more computationally tractable real variables (fluid-flow analysis).
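
    A minimal sketch of the idea (a generic two-state population, not a PEPA model): the discrete Markov chain over N components has 2^N global states, while the fluid abstraction tracks only two real-valued counts.

```python
# State-space explosion vs. fluid-flow abstraction, on a toy two-state system.
import math
import numpy as np
from scipy.integrate import solve_ivp

N, r_work, r_rest = 10_000, 1.5, 1.0   # hypothetical component count and rates
# Explicit state space: every component idle or busy -> 2**N global states.
print(f"discrete global states: 2**{N} ≈ 10^{int(N * math.log10(2))}")

def fluid(t, x):
    # Two coupled ODEs replace the exponentially large Markov chain.
    idle, busy = x
    return [r_rest * busy - r_work * idle,
            r_work * idle - r_rest * busy]

sol = solve_ivp(fluid, (0.0, 10.0), [float(N), 0.0])
print(f"fluid counts at t=10: idle≈{sol.y[0, -1]:.0f}, busy≈{sol.y[1, -1]:.0f}")
```

    The ODE solution approximates the expected component counts, and the approximation sharpens as N grows, which is exactly the regime where the discrete chain is hopeless.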

    ODE-based general moment approximations for PEPA

    In this paper we show how the powerful ODE-based fluid-analysis technique for the stochastic process algebra PEPA is an approximation to the first moments of the component-counting processes in question. For a large class of models this approximation has a particularly simple form, and it is possible to make qualitative statements about how the quality of the approximation varies with different parameters. Furthermore, this point of view facilitates a natural generalisation to higher-order moments, allowing modellers to approximate, for instance, the variance of the component counts. In particular, we show how systems of ODEs approximating arbitrary moments of the component-counting processes can be naturally defined. The effectiveness of this generalisation is illustrated by comparing the results with those obtained through stochastic simulation for a particular case study.
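
    A minimal sketch of the moment-ODE idea on a toy model (ours, not a PEPA compilation): N two-state components with idle→busy rate a and busy→idle rate b. Because the aggregate transition rates are linear in the busy count M, the ODEs for E[M] and E[M²] close exactly, so mean and variance come out of a single ODE solve; the stationary distribution is Binomial(N, a/(a+b)), which provides a check.

```python
# Moment ODEs for the busy count M of N independent two-state components.
import numpy as np
from scipy.integrate import solve_ivp

N, a, b = 100, 0.8, 1.2   # hypothetical population and rates

def moment_odes(t, y):
    m1, m2 = y                                   # E[M], E[M^2]
    dm1 = a * (N - m1) - b * m1
    # jumps of +-1: dE[M^2]/dt = E[lam(M)*(2M+1) + mu(M)*(1-2M)]
    dm2 = (2 * a * N * m1 - 2 * a * m2 + a * N - a * m1
           + b * m1 - 2 * b * m2)
    return [dm1, dm2]

sol = solve_ivp(moment_odes, (0.0, 5.0), [0.0, 0.0], rtol=1e-8)
m1, m2 = sol.y[0, -1], sol.y[1, -1]
print(f"E[M] ≈ {m1:.2f}, Var[M] ≈ {m2 - m1**2:.2f}")

# Exact stationary values for comparison: Binomial(N, a/(a+b))
p = a / (a + b)
print(f"binomial check: mean {N * p:.2f}, var {N * p * (1 - p):.2f}")
```

    Running a Gillespie simulation of the same chain converges to the same mean and variance, mirroring the comparison with stochastic simulation described in the abstract; for models with nonlinear rates the moment system no longer closes exactly and becomes an approximation.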